Posted to builds@beam.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2022/01/13 00:49:32 UTC

Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2906

See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2906/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13616][BEAM-13646] Update vendored calcite 1.28.0 with protobuf

[chamikaramj] Adds several example multi-language Python pipelines

[noreply] Merge pull request #16325 from [BEAM-13471] [Playground] Tag existing


------------------------------------------
[...truncated 345.79 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0ac448a07d52d2f247c50a9e53e6290e
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 13, 2022 12:44:50 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
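
For reference: the deprecated flag can be replaced on the command line with --sdkContainerImage, or set programmatically. A minimal sketch, assuming the setter is exposed through Beam's DataflowPipelineOptions (via DataflowPipelineWorkerPoolOptions) and using a hypothetical image URL:

    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class ContainerImageOption {
      public static void main(String[] args) {
        DataflowPipelineOptions options =
            PipelineOptionsFactory.fromArgs(args).withValidation().as(DataflowPipelineOptions.class);
        // Preferred replacement for the deprecated --workerHarnessContainerImage.
        // The image URL below is a hypothetical placeholder.
        options.setSdkContainerImage("gcr.io/my-project/beam-java8-custom:latest");
      }
    }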
    Jan 13, 2022 12:44:51 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 13, 2022 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 13, 2022 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 13, 2022 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 13, 2022 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 13, 2022 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 13, 2022 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 13, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 13, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
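
The two plans show the query before and after optimization: without push-down, the projection and filter stay in a BeamCalcRel stacked on the source. For context, a minimal sketch of running an equivalent query through the public SqlTransform API, assuming a pipeline whose table provider already exposes HACKER_NEWS:

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Sketch only: the test builds the relational plan internally; this is the
    // equivalent public API, assuming HACKER_NEWS resolves via a table provider.
    PCollection<Row> filtered =
        pipeline.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score "
                    + "FROM HACKER_NEWS "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));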


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@18392277]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
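
The failure is exactly what the message describes: ParDo(RowMonitor) emits a PCollection<Row>, and Beam cannot infer a RowCoder without a schema. A minimal sketch of the fix the message suggests, with field names and types assumed from the query's projection:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Sketch only: attach an explicit schema so a RowCoder can be derived.
    static PCollection<Row> withRowSchema(PCollection<Row> rows) {
      // Fields mirror SELECT author, type, title, score; types are assumptions.
      Schema schema =
          Schema.builder()
              .addNullableField("author", Schema.FieldType.STRING)
              .addNullableField("type", Schema.FieldType.STRING)
              .addNullableField("title", Schema.FieldType.STRING)
              .addNullableField("score", Schema.FieldType.INT64)
              .build();
      return rows.setRowSchema(schema);
    }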

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 13, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 13, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 13, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 13, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 13, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 13, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 13, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1046044067]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 13, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 13, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 13, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 13, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 13, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 13, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 13, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 13, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
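
This is the WHERE clause arriving at the BigQuery Storage Read API as a row restriction. A minimal sketch of what the push-down amounts to when written directly against BigQueryIO (the table reference is a placeholder):

    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;

    // Sketch only: project to the used fields and push the predicate down to
    // the Storage Read API, as the planner did above. Table name is hypothetical.
    pipeline.apply(
        BigQueryIO.readTableRows()
            .from("my-project:beam.HACKER_NEWS")
            .withMethod(Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));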
    Jan 13, 2022 12:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 13, 2022 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 13, 2022 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-HoKmZanH_5O4w8vR-XjUi706ly4Ab8VQuPSZeLzefik.jar
    Jan 13, 2022 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5090990917571992812.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-3TS4D002-BrlvizKZjgq1Nf7wfP8mDDR8P3k6xvmUCk.jar
    Jan 13, 2022 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 356 files cached, 1 files newly uploaded in 0 seconds
    Jan 13, 2022 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 13, 2022 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140660 bytes, hash 209353bfc9e2d3dfb8dd45acab35e198283a134ea2b3ebb902489c4e65050f30> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-IJNTv8ni09-43UWsqzXhmCg6E06is-u5AkicTmUFDzA.pb
    Jan 13, 2022 12:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 13, 2022 12:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 13, 2022 12:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 13, 2022 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 13, 2022 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-12_16_45_06-5676484448457521317?project=apache-beam-testing
    Jan 13, 2022 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-12_16_45_06-5676484448457521317
    Jan 13, 2022 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-12_16_45_06-5676484448457521317
    Jan 13, 2022 12:45:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-13T00:45:10.005Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 13, 2022 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T00:45:19.297Z: Worker configuration: e2-standard-2 in us-central1-a.
    Jan 13, 2022 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T00:45:20.007Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 13, 2022 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T00:45:20.062Z: Expanding GroupByKey operations into optimizable parts.
    Jan 13, 2022 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T00:45:20.094Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 13, 2022 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T00:45:20.174Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 13, 2022 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T00:45:20.231Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 13, 2022 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T00:45:20.271Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 13, 2022 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T00:45:20.728Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 13, 2022 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T00:45:20.819Z: Starting 5 workers in us-central1-a...
    Jan 13, 2022 12:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T00:45:34.172Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 13, 2022 12:46:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T00:46:05.956Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 13, 2022 12:46:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T00:46:35.022Z: Workers have started successfully.
    Jan 13, 2022 12:46:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T00:46:35.058Z: Workers have started successfully.
    Jan 13, 2022 12:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T00:47:05.593Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 13, 2022 12:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T00:47:05.817Z: Cleaning up.
    Jan 13, 2022 12:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T00:47:05.897Z: Stopping worker pool...
    Jan 13, 2022 12:49:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T00:49:24.061Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 13, 2022 12:49:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T00:49:24.123Z: Worker pool stopped.
    Jan 13, 2022 12:49:30 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-12_16_45_06-5676484448457521317 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 601c139c-aa60-4746-b70c-baa34eac48cf and timestamp: 2022-01-13T00:49:30.070000000Z:
                     Metric:                    Value:
                   read_time                     6.956
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 13, 2022 12:49:30 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
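
The metrics above were computed but not published because the InfluxDB settings were absent from the pipeline options. Assuming the option names used by Beam's InfluxDB test utilities, the fix would be to extend the -DbeamTestPipelineOptions array shown near the top of the log with entries along these lines (values are placeholders):

    "--influxDatabase=beam_performance","--influxMeasurement=sql_bqio_read_java_batch","--influxHost=http://localhost:8086"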

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 43.371 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 11s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/mvze5tgmie7cc

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3156

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3156/display/redirect?page=changes>

Changes:

[yiru] .

[yiru] .

[yiru] .

[yiru] format fix

[yiru] .

[yiru] make DoFn into a separate class

[yiru] .

[yiru] fix setting

[yiru] fix checkstyle

[yiru] spotlessapply

[noreply] Update Changes.md w/Go pipeline pre-process fix.

[noreply] [BEAM-14098] wrapper for postgres on JDBC IO GO SDK (#17088)

[noreply] Merge pull request #17023 from [BEAM-12164]: Remove child partition


------------------------------------------
[...truncated 362.72 KB...]
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 19, 2022 6:47:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 19, 2022 6:47:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 19, 2022 6:47:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 19, 2022 6:47:23 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 19, 2022 6:47:23 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-pZkKWdtfF4WxMWjUjt_dmRL7z8TSRbxWL8DYyz94Bpg.jar
    Mar 19, 2022 6:47:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.38.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.38.0-SNAPSHOT-tests-CkGJ26g4UGYUw7qzLSacMoy60n8-a5c4gFqb_JGb87Q.jar
    Mar 19, 2022 6:47:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.38.0-SNAPSHOT-6WvkkhsYGk4qZRBCzCCEdUcpUzifj46fScl5woWPDL0.jar
    Mar 19, 2022 6:47:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6435392382934827459.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test--PtcIRIEIQb6_NetNughPYCFwfNKVgtwePYYZus4TL4.jar
    Mar 19, 2022 6:47:26 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.dataformat/jackson-dataformat-xml/2.13.0/396634365e8439b27794fa94b571fdc03b4cf7be/jackson-dataformat-xml-2.13.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jackson-dataformat-xml-2.13.0-Yyu4WPqE_RYmgacP7BZkHcWrY2pzow2igQ0noIVmu7o.jar
    Mar 19, 2022 6:47:26 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.immutables/value/2.8.8/d99fa1e04af5a1fda42fa9412d68eb7fe17a1071/value-2.8.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/value-2.8.8-k_DQ4kEt4FHC2WzIqOzB87qR2WH_Mw81deYbQ6-miFA.jar
    Mar 19, 2022 6:47:27 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.fasterxml.woodstox/woodstox-core/6.2.6/d4a055af5dcadea8d1bded5b64bc7be5bb7fea95/woodstox-core-6.2.6.jar to gs://temp-storage-for-perf-tests/loadtests/staging/woodstox-core-6.2.6-7RMZid5VnxZ00HVSgs8J3XgTkSFBRgr4IvXvHJtaCuE.jar
    Mar 19, 2022 6:47:27 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.inject.extensions/guice-servlet/3.0/610cde0e8da5a8b7d8efb8f0b8987466ffebaaf9/guice-servlet-3.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/guice-servlet-3.0-nnKkuFgoiNU8L0KX6TJ2o8FMgogBJEkPLaexap3xxhg.jar
    Mar 19, 2022 6:47:27 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mortbay.jetty/servlet-api/2.5-20081211/22bff70037e1e6fa7e6413149489552ee2064702/servlet-api-2.5-20081211.jar to gs://temp-storage-for-perf-tests/loadtests/staging/servlet-api-2.5-20081211-BodWCWmW_gD2BKw7ZnLW9mPcd36kqDBW4kDQRW535HI.jar
    Mar 19, 2022 6:47:28 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 364 files cached, 8 files newly uploaded in 4 seconds
    Mar 19, 2022 6:47:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 19, 2022 6:47:28 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145671 bytes, hash 159339a71db2e289323fa96f48dad993b6064a2514e17e477835f45607ba6956> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-FZM5px2y4okyP6lvSNrZk7YGSiUU4X5HeDX0Vge6aVY.pb
    Mar 19, 2022 6:47:34 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 19, 2022 6:47:35 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 19, 2022 6:47:35 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 19, 2022 6:47:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 19, 2022 6:47:36 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-18_23_47_35-16535822955029889034?project=apache-beam-testing
    Mar 19, 2022 6:47:36 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-18_23_47_35-16535822955029889034
    Mar 19, 2022 6:47:36 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-18_23_47_35-16535822955029889034
    Mar 19, 2022 6:47:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-19T06:47:37.322Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 19, 2022 6:47:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T06:47:48.420Z: Worker configuration: e2-standard-2 in us-central1-a.
    Mar 19, 2022 6:47:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T06:47:49.107Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 19, 2022 6:47:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T06:47:49.139Z: Expanding GroupByKey operations into optimizable parts.
    Mar 19, 2022 6:47:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T06:47:49.179Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 19, 2022 6:47:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T06:47:49.254Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 19, 2022 6:47:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T06:47:49.278Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 19, 2022 6:47:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T06:47:49.311Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 19, 2022 6:47:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T06:47:49.712Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 19, 2022 6:47:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T06:47:49.818Z: Starting 5 workers in us-central1-a...
    Mar 19, 2022 6:47:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T06:47:56.961Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 19, 2022 6:48:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T06:48:24.055Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Mar 19, 2022 6:48:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T06:48:24.092Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Mar 19, 2022 6:48:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T06:48:34.456Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 19, 2022 6:48:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T06:48:56.615Z: Workers have started successfully.
    Mar 19, 2022 6:49:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-19T06:49:29.753Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGNoX2o0dlVNMXJEYxoCamQaAmly/streams/CAIaAmpkGgJpciC8tqG6BigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGNoX2o0dlVNMXJEYxoCamQaAmly/streams/CAIaAmpkGgJpciC8tqG6BigC': offset 75870 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGNoX2o0dlVNMXJEYxoCamQaAmly/streams/CAIaAmpkGgJpciC8tqG6BigC': offset 75870 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
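
The FAILED_PRECONDITION above comes from the BigQuery Storage Read API when a ReadRows call asks for a stream offset the server has not yet produced, typically on a retried or resumed read. A minimal sketch of the underlying client call, assuming the google-cloud-bigquerystorage v1 client and placeholder stream/offset values:

    import com.google.api.gax.rpc.ServerStream;
    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    // Sketch only: resume a read stream at an explicit offset. Requesting an
    // offset beyond what the stream has allocated fails with FAILED_PRECONDITION,
    // as in the worker log above. Stream name and offset are placeholders.
    String streamName =
        "projects/my-project/locations/us/sessions/SESSION_ID/streams/STREAM_ID";
    long resumeOffset = 75870L;
    try (BigQueryReadClient client = BigQueryReadClient.create()) {
      ReadRowsRequest request =
          ReadRowsRequest.newBuilder().setReadStream(streamName).setOffset(resumeOffset).build();
      ServerStream<ReadRowsResponse> stream = client.readRowsCallable().call(request);
      for (ReadRowsResponse response : stream) {
        long rows = response.getRowCount(); // Avro or Arrow payload accompanies this
      }
    }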

    Mar 19, 2022 6:49:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-19T06:49:30.760Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGNoX2o0dlVNMXJEYxoCamQaAmly/streams/CAUaAmpkGgJpciCnsbKdBSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGNoX2o0dlVNMXJEYxoCamQaAmly/streams/CAUaAmpkGgJpciCnsbKdBSgC': offset 93103 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGNoX2o0dlVNMXJEYxoCamQaAmly/streams/CAUaAmpkGgJpciCnsbKdBSgC': offset 93103 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 19, 2022 6:49:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T06:49:33.634Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 19, 2022 6:49:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T06:49:33.811Z: Cleaning up.
    Mar 19, 2022 6:49:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T06:49:33.907Z: Stopping worker pool...
    Mar 19, 2022 6:51:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T06:51:49.725Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 19, 2022 6:51:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T06:51:49.770Z: Worker pool stopped.
    Mar 19, 2022 6:51:55 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-18_23_47_35-16535822955029889034 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 22a17ed3-7547-4ccd-9403-d6226e15d3bf and timestamp: 2022-03-19T06:51:55.107000000Z:
                     Metric:                    Value:
                   read_time                    10.308
                 fields_read                 4632048.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 19, 2022 6:51:55 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 5 mins 7.63 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 13s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/qvpglvcfrlwyo

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3155

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3155/display/redirect?page=changes>

Changes:

[Kiley Sok] Add Java 17 Nexmark metrics to Grafana

[noreply] [BEAM-14128] Eliminating quadratic behavior of

[noreply] [BEAM-13972] Add RunInference interface (#16917)

[noreply] Merge pull request #17116 from [BEAM-12164] Remove change_stream in

[noreply] Deprecate tags.go (#17025)

[noreply] [BEAM-12753] and [BEAM-12815] Fix Flink Integration Tests (#17067)

[noreply] Merge pull request #16895 from [BEAM-13882][Playground] More tests for

[noreply] [BEAM-13925] Add weekly automation to update our reviewer config

[noreply] Merge pull request #17076 from Beam 14082 update payground for mobile

[noreply] [BEAM-13925] Assign committers in the scheduled action (#17062)

[noreply] Pin setup-gcloud to v0 instead of master (#17123)

[noreply] [BEAM-3304] documentation for PaneInfo in BPG (#17047)

[noreply] Merge pull request #17016 from [BEAM-14049][Playground] Add new API

[noreply] Merge pull request #17077 from [BEAM-14078] [Website] change link

[noreply] Merge pull request #17085 from [BEAM-14077] [Website] add beam


------------------------------------------
[...truncated 356.80 KB...]
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 19, 2022 12:52:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 19, 2022 12:52:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 19, 2022 12:52:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 19, 2022 12:52:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 19, 2022 12:52:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 19, 2022 12:52:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 19, 2022 12:52:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@15076442]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 19, 2022 12:52:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 19, 2022 12:52:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 19, 2022 12:52:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 19, 2022 12:52:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 19, 2022 12:52:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 19, 2022 12:52:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 19, 2022 12:52:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 19, 2022 12:52:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
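
For reference, the push-down the planner performs here is what a hand-written BigQueryIO read expresses with selected fields plus a row restriction. A rough sketch under the assumption that the table is the public Hacker News dataset (the table reference is illustrative, not this job's actual table provider configuration):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    public class DirectReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        p.apply("Read BQ rows with push-down",
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full")  // illustrative table
                .withMethod(TypedRead.Method.DIRECT_READ)       // BigQuery Storage Read API
                // Column projection and predicate are applied server-side,
                // mirroring usedFields and BigQueryFilter in the plan above.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }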
    Mar 19, 2022 12:52:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 19, 2022 12:52:15 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 19, 2022 12:52:15 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-pZkKWdtfF4WxMWjUjt_dmRL7z8TSRbxWL8DYyz94Bpg.jar
    Mar 19, 2022 12:52:15 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1366616127741074199.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-NMZIWafthvNusa-BvbFJKrayZNoDqniOKk4gq0qFOoA.jar
    Mar 19, 2022 12:52:15 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gradle-worker-aMAjN1-qWRIfqycf4eeVWOEU2x77hLGEXQYi0ALeTIM.jar
    Mar 19, 2022 12:52:15 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 370 files cached, 2 files newly uploaded in 0 seconds
    Mar 19, 2022 12:52:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 19, 2022 12:52:15 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145671 bytes, hash 69245b5979088cbcb079d69c6cf7a73eb5977d23c19438f1aa76372012cd489c> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-aSRbWXkIjLywedacbPenPrWXfSPBlDjxqnY3IBLNSJw.pb
    Mar 19, 2022 12:52:19 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 19, 2022 12:52:19 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 19, 2022 12:52:19 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 19, 2022 12:52:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 19, 2022 12:52:20 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-18_17_52_19-3188878448261398117?project=apache-beam-testing
    Mar 19, 2022 12:52:20 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-18_17_52_19-3188878448261398117
    Mar 19, 2022 12:52:20 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-18_17_52_19-3188878448261398117
    Mar 19, 2022 12:52:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-19T00:52:20.932Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 19, 2022 12:52:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T00:52:30.324Z: Worker configuration: e2-standard-2 in us-central1-b.
    Mar 19, 2022 12:52:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T00:52:30.919Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 19, 2022 12:52:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T00:52:30.947Z: Expanding GroupByKey operations into optimizable parts.
    Mar 19, 2022 12:52:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T00:52:30.968Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 19, 2022 12:52:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T00:52:31.026Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 19, 2022 12:52:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T00:52:31.048Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 19, 2022 12:52:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T00:52:31.070Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 19, 2022 12:52:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T00:52:31.398Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 19, 2022 12:52:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T00:52:31.479Z: Starting 5 workers in us-central1-b...
    Mar 19, 2022 12:52:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T00:52:52.996Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 19, 2022 12:53:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T00:53:16.435Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 19, 2022 12:53:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T00:53:42.408Z: Workers have started successfully.
    Mar 19, 2022 12:53:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T00:53:42.437Z: Workers have started successfully.
    Mar 19, 2022 12:54:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-19T00:54:14.013Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDG43TC1DTVhiaW1iaBoCamQaAmly/streams/CAEaAmpkGgJpciCsqZ6QBigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG43TC1DTVhiaW1iaBoCamQaAmly/streams/CAEaAmpkGgJpciCsqZ6QBigC': offset 127367 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG43TC1DTVhiaW1iaBoCamQaAmly/streams/CAEaAmpkGgJpciCsqZ6QBigC': offset 127367 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
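
The SEVERE entry above is a server-side error from the BigQuery Storage Read API: FAILED_PRECONDITION with "offset N has not been allocated yet" means the worker asked a read stream for an offset the service had not yet produced, which in practice surfaces as a transient failure when a read is retried or a stream is split. Dataflow retries the failed work item, which is why the stage still finishes and the job reports status DONE a few lines below.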

    Mar 19, 2022 12:54:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T00:54:16.766Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 19, 2022 12:54:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T00:54:16.932Z: Cleaning up.
    Mar 19, 2022 12:54:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T00:54:17.024Z: Stopping worker pool...
    Mar 19, 2022 12:56:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T00:56:41.868Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 19, 2022 12:56:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-19T00:56:41.919Z: Worker pool stopped.
    Mar 19, 2022 12:56:48 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-18_17_52_19-3188878448261398117 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f73ab43d-e46a-42a6-993d-3632c85c49a2 and timestamp: 2022-03-19T00:56:48.917000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.011

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 19, 2022 12:56:49 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
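
The metrics shown above were collected but not exported because the InfluxDB settings were absent from the pipeline options. Assuming the conventional Beam load-test option names (not verified against this job's configuration), publishing would be enabled by adding something like:

    --influxDatabase=beam_test_metrics --influxMeasurement=sql_bqio_read_java_batch --influxHost=http://<influx-host>:8086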

Gradle Test Executor 221 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.006 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.004 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 53.512 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 43s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/6o7vgc375ilhe

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3154

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3154/display/redirect?page=changes>

Changes:

[mmack] [adhoc] Minor cleanup for aws2 tests

[mmack] [BEAM-14125] Update website IO matrix to recommend aws2 IOs


------------------------------------------
[...truncated 366.13 KB...]
    Mar 18, 2022 6:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T18:45:28.776Z: Worker configuration: e2-standard-2 in us-central1-f.
    Mar 18, 2022 6:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T18:45:29.810Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 18, 2022 6:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T18:45:29.860Z: Expanding GroupByKey operations into optimizable parts.
    Mar 18, 2022 6:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T18:45:29.887Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 18, 2022 6:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T18:45:29.971Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 18, 2022 6:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T18:45:30.004Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 18, 2022 6:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T18:45:30.038Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 18, 2022 6:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T18:45:30.479Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 18, 2022 6:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T18:45:30.567Z: Starting 5 workers in us-central1-f...
    Mar 18, 2022 6:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T18:45:58.760Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 18, 2022 6:46:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T18:46:02.229Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Mar 18, 2022 6:46:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T18:46:02.266Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Mar 18, 2022 6:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T18:46:12.516Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 18, 2022 6:46:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T18:46:36.868Z: Workers have started successfully.
    Mar 18, 2022 6:46:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T18:46:36.900Z: Workers have started successfully.
    Mar 18, 2022 6:47:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-18T18:47:08.069Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFdibG5DZTl1bFVoahoCamQaAmly/streams/CAYaAmpkGgJpciCG_PihBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFdibG5DZTl1bFVoahoCamQaAmly/streams/CAYaAmpkGgJpciCG_PihBygC': offset 67392 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFdibG5DZTl1bFVoahoCamQaAmly/streams/CAYaAmpkGgJpciCG_PihBygC': offset 67392 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 18, 2022 6:47:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-18T18:47:08.092Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFdibG5DZTl1bFVoahoCamQaAmly/streams/CAcaAmpkGgJpciCPyNnUAygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFdibG5DZTl1bFVoahoCamQaAmly/streams/CAcaAmpkGgJpciCPyNnUAygC': offset 102316 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFdibG5DZTl1bFVoahoCamQaAmly/streams/CAcaAmpkGgJpciCPyNnUAygC': offset 102316 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 18, 2022 6:47:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-18T18:47:08.182Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFdibG5DZTl1bFVoahoCamQaAmly/streams/CAUaAmpkGgJpciDmp_aAAigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFdibG5DZTl1bFVoahoCamQaAmly/streams/CAUaAmpkGgJpciDmp_aAAigC': offset 128749 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFdibG5DZTl1bFVoahoCamQaAmly/streams/CAUaAmpkGgJpciDmp_aAAigC': offset 128749 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 18, 2022 6:47:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T18:47:09.485Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 18, 2022 6:47:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T18:47:09.684Z: Cleaning up.
    Mar 18, 2022 6:47:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T18:47:09.821Z: Stopping worker pool...
    Mar 18, 2022 6:49:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T18:49:28.728Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 18, 2022 6:49:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T18:49:28.876Z: Worker pool stopped.
    Mar 18, 2022 6:49:40 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-18_11_45_17-735227259236668755 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 7c0555ff-1cee-4c1b-b677-0dce02d8acc9 and timestamp: 2022-03-18T18:49:40.194000000Z:
                     Metric:                    Value:
                   read_time                     9.692
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 18, 2022 6:49:40 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 44.085 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 20s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/3fhlgrdi5ph5s

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3153

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3153/display/redirect>

Changes:


------------------------------------------
[...truncated 357.72 KB...]
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 18, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 18, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 18, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 18, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 18, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 18, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 18, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 18, 2022 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 18, 2022 12:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 18, 2022 12:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-pZkKWdtfF4WxMWjUjt_dmRL7z8TSRbxWL8DYyz94Bpg.jar
    Mar 18, 2022 12:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3755946534246641091.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-vmyXXb80OdZlkoph-yEkBZbjfODZZDYEjsFvFrDTxP0.jar
    Mar 18, 2022 12:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 371 files cached, 1 files newly uploaded in 0 seconds
    Mar 18, 2022 12:45:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 18, 2022 12:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145672 bytes, hash 8e25be65ec2179053885496117e732f86229a7558580d553271856e3bc8c966d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-jiW-ZewheQU4hUlhF-cy-GIpp1WFgNVTJxhW47yMlm0.pb
    Mar 18, 2022 12:45:21 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 18, 2022 12:45:22 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 18, 2022 12:45:22 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 18, 2022 12:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 18, 2022 12:45:23 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-18_05_45_22-18117282414129295250?project=apache-beam-testing
    Mar 18, 2022 12:45:23 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-18_05_45_22-18117282414129295250
    Mar 18, 2022 12:45:23 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-18_05_45_22-18117282414129295250
    Mar 18, 2022 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-18T12:45:24.103Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 18, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T12:45:32.934Z: Worker configuration: e2-standard-2 in us-central1-c.
    Mar 18, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T12:45:33.553Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 18, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T12:45:33.594Z: Expanding GroupByKey operations into optimizable parts.
    Mar 18, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T12:45:33.621Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 18, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T12:45:33.682Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 18, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T12:45:33.711Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 18, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T12:45:33.735Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 18, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T12:45:34.133Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 18, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T12:45:34.205Z: Starting 5 workers in us-central1-c...
    Mar 18, 2022 12:45:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T12:45:57.081Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 18, 2022 12:46:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T12:46:13.135Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Mar 18, 2022 12:46:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T12:46:13.154Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Mar 18, 2022 12:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T12:46:23.350Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 18, 2022 12:46:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T12:46:39.234Z: Workers have started successfully.
    Mar 18, 2022 12:47:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-18T12:47:09.846Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGw2Y0NobnJFbVVqMBoCamQaAmly/streams/CAEaAmpkGgJpciCi8dqZAygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGw2Y0NobnJFbVVqMBoCamQaAmly/streams/CAEaAmpkGgJpciCi8dqZAygC': offset 76803 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGw2Y0NobnJFbVVqMBoCamQaAmly/streams/CAEaAmpkGgJpciCi8dqZAygC': offset 76803 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

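The FAILED_PRECONDITION above (and repeated below) comes from the BigQuery Storage Read API: the reader requested a row offset on the stream that the server had not yet allocated, so the server rejects the request. To make the offset semantics concrete, here is a minimal sketch of reading a Storage API stream while tracking the next valid offset, assuming a hypothetical stream name and the google-cloud-bigquerystorage v1 client; it illustrates the API contract, not the internal code path of Beam's BigQueryStorageStreamReader.

    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;
    import java.io.IOException;

    public class StorageReadOffsetSketch {
      public static void main(String[] args) throws IOException {
        // Hypothetical stream name; real names come from a CreateReadSession response.
        String streamName =
            "projects/my-project/locations/us/sessions/SESSION_ID/streams/STREAM_ID";
        long offset = 0; // next row to request; asking past the allocated range fails as above
        try (BigQueryReadClient client = BigQueryReadClient.create()) {
          ReadRowsRequest request =
              ReadRowsRequest.newBuilder().setReadStream(streamName).setOffset(offset).build();
          for (ReadRowsResponse response : client.readRowsCallable().call(request)) {
            offset += response.getRowCount(); // track progress so a resume starts at a valid offset
          }
        }
      }
    }

A reader that retries after a transient failure resumes from the tracked offset; requesting an offset beyond what the server has handed out produces exactly the "offset ... has not been allocated yet" error logged above.
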
    Mar 18, 2022 12:47:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-18T12:47:09.854Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGw2Y0NobnJFbVVqMBoCamQaAmly/streams/CAcaAmpkGgJpciCUzZrlAygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGw2Y0NobnJFbVVqMBoCamQaAmly/streams/CAcaAmpkGgJpciCUzZrlAygC': offset 96452 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGw2Y0NobnJFbVVqMBoCamQaAmly/streams/CAcaAmpkGgJpciCUzZrlAygC': offset 96452 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 18, 2022 12:47:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T12:47:12.053Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 18, 2022 12:47:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T12:47:12.218Z: Cleaning up.
    Mar 18, 2022 12:47:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T12:47:12.296Z: Stopping worker pool...
    Mar 18, 2022 12:49:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T12:49:32.966Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 18, 2022 12:49:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T12:49:33.002Z: Worker pool stopped.
    Mar 18, 2022 12:49:38 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-18_05_45_22-18117282414129295250 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 95ff2954-cdd0-4d75-8692-2da93d4fbeaa and timestamp: 2022-03-18T12:49:38.342000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.411

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 18, 2022 12:49:38 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

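The InfluxDB warning above means the load-test metrics were computed but not exported. The publisher in org.apache.beam.sdk.testutils.publishing reads its target database and measurement from pipeline options; assuming the usual Beam test options apply, entries like the following (names and values are illustrative assumptions, not taken from this build's configuration) added to -DbeamTestPipelineOptions would enable publishing:

    "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch","--influxHost=http://localhost:8086"
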
Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 35.598 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 1s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/sqthrr5mjsxye

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3152

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3152/display/redirect?page=changes>

Changes:

[Luke Cwik] [BEAM-10212] Clean-up comments, remove rawtypes usage.

[noreply] [BEAM-13893] improved coverage of jobopts package (#17003)

[noreply] Merge pull request #16977 from [BEAM-12164]  Added integration test for


------------------------------------------
[...truncated 350.36 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 48d5fb3f445a8dde0e91ababd941df1b
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 18, 2022 6:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 18, 2022 6:45:17 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 18, 2022 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 371 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 18, 2022 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 18, 2022 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 18, 2022 6:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 18, 2022 6:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 18, 2022 6:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 18, 2022 6:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 18, 2022 6:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1293369737]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

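This failure is exactly what the exception text prescribes a fix for: a PCollection of Beam Rows has no coder the CoderRegistry can infer, so a schema must be attached, after which the SDK derives a RowCoder. A minimal, self-contained sketch of the remedy the message names, PCollection.setRowSchema, using a hypothetical schema mirroring the query's projected columns:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create();
        // Hypothetical schema mirroring the projected columns (author, type, title, score).
        Schema schema =
            Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();
        Row row = Row.withSchema(schema).addValues("alice", "story", "Example title", 3L).build();
        PCollection<Row> rows =
            pipeline.apply(Create.of(row)).setRowSchema(schema); // schema attached -> RowCoder inferred
        pipeline.run().waitUntilFinish();
      }
    }

The alternative the message offers, setCoder(), works the same way once a RowCoder built from the schema is supplied explicitly.
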
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 18, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 18, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 18, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 18, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 18, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 18, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 18, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@15076442]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 18, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 18, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 18, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 18, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 18, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 18, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 18, 2022 6:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 18, 2022 6:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
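    For context, the statement above is the kind of query Beam SQL plans into the BeamPushDownIOSourceRel shown: the projection is narrowed to the used fields and the supported filter is handed to the source. A minimal sketch of issuing an equivalent query with SqlTransform against an in-memory, schema-aware PCollection (the schema and data here are assumptions for illustration; a single-input SqlTransform exposes its input under the default table name PCOLLECTION):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushDownQuerySketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create();
        // Hypothetical schema with the four columns the query touches.
        Schema schema =
            Schema.builder()
                .addStringField("by")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();
        PCollection<Row> items =
            pipeline
                .apply(Create.of(Row.withSchema(schema).addValues("alice", "story", "Example", 5L).build()))
                .setRowSchema(schema);
        PCollection<Row> filtered =
            items.apply(
                SqlTransform.query(
                    "SELECT `by` AS author, `type`, title, score FROM PCOLLECTION "
                        + "WHERE (`type` = 'story' OR `type` = 'job') AND score > 2"));
        pipeline.run().waitUntilFinish();
      }
    }

    Push-down itself only happens when the source is a push-down-capable table provider, as with the BigQuery DIRECT_READ table in this test; against an in-memory PCollection the same query simply runs as a BeamCalcRel.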
    Mar 18, 2022 6:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 18, 2022 6:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 18, 2022 6:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-pZkKWdtfF4WxMWjUjt_dmRL7z8TSRbxWL8DYyz94Bpg.jar
    Mar 18, 2022 6:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test690326559929309774.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-xjQ3RPx65OKZDGkflLX80zvvKy4olIqxOySr0z38EFc.jar
    Mar 18, 2022 6:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 371 files cached, 1 files newly uploaded in 0 seconds
    Mar 18, 2022 6:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 18, 2022 6:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145671 bytes, hash 60a68436cca79565f3c45b8ff69eea83329d034a935792bb8f8213c9f3f0412a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-YKaENsynlWXzxFuP9p7qgzKdA0qTV5K7j4ITyfPwQSo.pb
    Mar 18, 2022 6:45:33 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 18, 2022 6:45:34 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 18, 2022 6:45:34 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 18, 2022 6:45:34 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 18, 2022 6:45:34 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-17_23_45_34-8195452588900207619?project=apache-beam-testing
    Mar 18, 2022 6:45:34 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-17_23_45_34-8195452588900207619
    Mar 18, 2022 6:45:34 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-17_23_45_34-8195452588900207619
    Mar 18, 2022 6:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-18T06:45:35.482Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 18, 2022 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T06:45:44.399Z: Worker configuration: e2-standard-2 in us-central1-f.
    Mar 18, 2022 6:45:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T06:45:45.217Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 18, 2022 6:45:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T06:45:45.256Z: Expanding GroupByKey operations into optimizable parts.
    Mar 18, 2022 6:45:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T06:45:45.283Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 18, 2022 6:45:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T06:45:45.360Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 18, 2022 6:45:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T06:45:45.385Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 18, 2022 6:45:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T06:45:45.421Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 18, 2022 6:45:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T06:45:45.779Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 18, 2022 6:45:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T06:45:45.862Z: Starting 5 workers in us-central1-f...
    Mar 18, 2022 6:46:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T06:46:01.904Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 18, 2022 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T06:46:37.408Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 18, 2022 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T06:47:04.746Z: Workers have started successfully.
    Mar 18, 2022 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T06:47:04.785Z: Workers have started successfully.
    Mar 18, 2022 6:47:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T06:47:42.135Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 18, 2022 6:47:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T06:47:42.254Z: Cleaning up.
    Mar 18, 2022 6:47:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T06:47:42.328Z: Stopping worker pool...
    Mar 18, 2022 6:50:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T06:50:00.996Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 18, 2022 6:50:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T06:50:01.036Z: Worker pool stopped.
    Mar 18, 2022 6:50:06 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-17_23_45_34-8195452588900207619 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 74a2cad6-3487-41af-aaec-ae5dd842eec2 and timestamp: 2022-03-18T06:50:06.347000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                      8.44

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 18, 2022 6:50:06 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 54.15 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 45s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/va4b46gj7ve44

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3151

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3151/display/redirect?page=changes>

Changes:

[noreply] Populate environment capabilities in v1beta3 protos. (#17042)

[Kyle Weaver] [BEAM-12976] Test a whole pipeline using projection pushdown in BQ IO.

[Kyle Weaver] [BEAM-12976] Enable projection pushdown for Java pipelines on Dataflow,

[noreply] [BEAM-14038] Auto-startup for Python expansion service. (#17035)

[Kyle Weaver] [BEAM-14123] Fix typo in hdfsIntegrationTest task name.


------------------------------------------
[...truncated 351.19 KB...]
Compiling with toolchain '/usr/lib/jvm/java-8-openjdk-amd64'.
Compiling with JDK Java compiler API.
Class dependency analysis for incremental compilation took 0.019 secs.
Created classpath snapshot for incremental compilation in 0.358 secs.
Stored cache entry for task ':sdks:java:extensions:sql:perf-tests:compileTestJava' with cache key 993c4c2a3d31a5c6a3d4b12d53793e47
:sdks:java:extensions:sql:perf-tests:compileTestJava (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 3.227 secs.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 6,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.
Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is fb39ffe01d28ca1d66565ed1e48dda27
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 18, 2022 12:49:20 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 18, 2022 12:49:21 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 18, 2022 12:49:22 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 371 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 18, 2022 12:49:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 18, 2022 12:49:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 18, 2022 12:49:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 18, 2022 12:49:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 18, 2022 12:49:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 18, 2022 12:49:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 18, 2022 12:49:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1293369737]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 18, 2022 12:49:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 18, 2022 12:49:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 18, 2022 12:49:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 18, 2022 12:49:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 18, 2022 12:49:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 18, 2022 12:49:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 18, 2022 12:49:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@15076442]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 18, 2022 12:49:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 18, 2022 12:49:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 18, 2022 12:49:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 18, 2022 12:49:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 18, 2022 12:49:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 18, 2022 12:49:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 18, 2022 12:49:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 18, 2022 12:49:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 18, 2022 12:49:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 18, 2022 12:49:41 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 18, 2022 12:49:41 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-pZkKWdtfF4WxMWjUjt_dmRL7z8TSRbxWL8DYyz94Bpg.jar
    Mar 18, 2022 12:49:41 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5056842382404006177.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-S-Fvfm0bkW0KgFzB6X3Iq8bherkByCqzB0HqfyL2Oik.jar
    Mar 18, 2022 12:49:42 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 371 files cached, 1 files newly uploaded in 1 seconds
    Mar 18, 2022 12:49:42 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 18, 2022 12:49:42 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145671 bytes, hash 20241f9b9900246c1ef28f2a660801c6d1c045582bda7364c1aeb61872eebbf6> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-ICQfm5kAJGwe8o8qZggBxtHARVgr2nNkwa62GHLuu_Y.pb
    Mar 18, 2022 12:49:46 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 18, 2022 12:49:46 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 18, 2022 12:49:46 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 18, 2022 12:49:46 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 18, 2022 12:49:47 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-17_17_49_46-12261512404546314326?project=apache-beam-testing
    Mar 18, 2022 12:49:47 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-17_17_49_46-12261512404546314326
    Mar 18, 2022 12:49:47 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-17_17_49_46-12261512404546314326
    Mar 18, 2022 12:49:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-18T00:49:47.728Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 18, 2022 12:49:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T00:49:56.297Z: Worker configuration: e2-standard-2 in us-central1-f.
    Mar 18, 2022 12:49:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T00:49:56.883Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 18, 2022 12:49:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T00:49:56.931Z: Expanding GroupByKey operations into optimizable parts.
    Mar 18, 2022 12:49:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T00:49:56.949Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 18, 2022 12:49:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T00:49:57.006Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 18, 2022 12:49:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T00:49:57.035Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 18, 2022 12:49:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-18T00:49:57.288Z: Workflow failed. Causes: Workflow failed due to internal error. Please contact Google Cloud Support.
    Mar 18, 2022 12:50:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T00:50:08.916Z: Cleaning up.
    Mar 18, 2022 12:50:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T00:50:08.955Z: Worker pool stopped.
    Mar 18, 2022 12:50:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-18T00:50:09.244Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 18, 2022 12:51:18 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-17_17_49_46-12261512404546314326 failed with status FAILED.
    Mar 18, 2022 12:51:18 AM org.apache.beam.sdk.testutils.metrics.MetricsReader getCounterMetric
    SEVERE: Failed to get metric fields_read, from namespace org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 18ce6ce0-f75b-4563-aa51-eb8d11021c55 and timestamp: 2022-03-18T00:51:18.591000000Z:
                     Metric:                    Value:
                   read_time                       0.0
                 fields_read                      -1.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 18, 2022 12:51:18 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 2 mins 2.421 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 3m 37s
165 actionable tasks: 106 executed, 57 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/2kk7vxryhnz6w

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3150

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3150/display/redirect?page=changes>

Changes:

[noreply] [BEAM-11934] Add enable_file_dynamic_sharding to allow DataflowRunner

[noreply] [BEAM-12777] Create symlink for `current` directory (#17105)

[noreply] [BEAM-14020] Adding SchemaTransform, SchemaTransformProvider,

[noreply] [BEAM-13015] Modify metrics to begin and reset to a non-dirty state.

[noreply] [BEAM-14112] Avoid storing a generator in _CustomBigQuerySource (#17100)


------------------------------------------
[...truncated 368.50 KB...]
    Mar 17, 2022 6:48:36 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 17, 2022 6:48:36 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 17, 2022 6:48:36 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 17, 2022 6:48:36 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 17, 2022 6:48:37 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 17, 2022 6:48:37 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
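
The plan above is the point of this test: Calcite's LogicalProject and LogicalFilter collapse into a single BeamPushDownIOSourceRel, so only the four used fields and the supported predicate reach the BigQuery Storage Read API. As a minimal sketch, what that push-down amounts to at the IO level can be written directly against BigQueryIO's public API (the table id below is a placeholder, not the test's actual table):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.values.PCollection;

    Pipeline p = Pipeline.create();
    // DIRECT_READ lets the Storage Read API apply the projection and filter
    // server-side, which is what the BeamPushDownIOSourceRel above encodes.
    PCollection<TableRow> rows =
        p.apply(
            BigQueryIO.readTableRows()
                .from("some-project:beam.HACKER_NEWS")  // placeholder table id
                .withMethod(Method.DIRECT_READ)
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));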
    Mar 17, 2022 6:48:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 17, 2022 6:48:42 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 17, 2022 6:48:42 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-pZkKWdtfF4WxMWjUjt_dmRL7z8TSRbxWL8DYyz94Bpg.jar
    Mar 17, 2022 6:48:42 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test764638559985978418.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-15PThlpSItH4IcE9tJyEOzgIgP2w4zt_t0DUDQQ6UWc.jar
    Mar 17, 2022 6:48:42 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 371 files cached, 1 files newly uploaded in 0 seconds
    Mar 17, 2022 6:48:42 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 17, 2022 6:48:43 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145671 bytes, hash adc2b89919803fd2df6f7dbb631929696fd42c3f9430f2c6a8c703b63faff4ab> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-rcK4mRmAP9Lfb327YxkpaW_ULD-UMPLGqMcDtj-v9Ks.pb
    Mar 17, 2022 6:48:46 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 17, 2022 6:48:46 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 17, 2022 6:48:46 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 17, 2022 6:48:46 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 17, 2022 6:48:47 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-17_11_48_46-7706226701496817302?project=apache-beam-testing
    Mar 17, 2022 6:48:47 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-17_11_48_46-7706226701496817302
    Mar 17, 2022 6:48:47 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-17_11_48_46-7706226701496817302
    Mar 17, 2022 6:48:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-17T18:48:49.186Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 17, 2022 6:49:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T18:49:12.554Z: Worker configuration: e2-standard-2 in us-central1-a.
    Mar 17, 2022 6:49:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T18:49:14.015Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 17, 2022 6:49:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T18:49:14.067Z: Expanding GroupByKey operations into optimizable parts.
    Mar 17, 2022 6:49:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T18:49:14.108Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 17, 2022 6:49:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T18:49:14.164Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 17, 2022 6:49:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T18:49:14.195Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 17, 2022 6:49:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T18:49:14.218Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 17, 2022 6:49:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T18:49:14.621Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 17, 2022 6:49:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T18:49:14.695Z: Starting 5 workers in us-central1-a...
    Mar 17, 2022 6:49:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T18:49:21.149Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 17, 2022 6:49:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T18:49:46.471Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Mar 17, 2022 6:49:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T18:49:46.497Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Mar 17, 2022 6:49:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T18:49:56.795Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 17, 2022 6:50:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T18:50:18.792Z: Workers have started successfully.
    Mar 17, 2022 6:50:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T18:50:18.825Z: Workers have started successfully.
    Mar 17, 2022 6:50:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-17T18:50:49.777Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGFhZ016SVNoTlJvehoCaXIaAmpk/streams/CAkaAmlyGgJqZCDamv-HBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGFhZ016SVNoTlJvehoCaXIaAmpk/streams/CAkaAmlyGgJqZCDamv-HBygC': offset 80348 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGFhZ016SVNoTlJvehoCaXIaAmpk/streams/CAkaAmlyGgJqZCDamv-HBygC': offset 80348 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
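
The FAILED_PRECONDITION above comes straight from the Storage Read API: the worker tried to resume the stream at offset 80348, but the server had not yet allocated rows up to that offset on that stream. A sketch of the underlying resume call with the google-cloud-bigquerystorage client (stream name and offset are parameters; this is the raw API, not Beam's internal reader):

    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    static void resumeStream(String streamName, long rowsAlreadyRead) throws Exception {
      try (BigQueryReadClient client = BigQueryReadClient.create()) {
        ReadRowsRequest request =
            ReadRowsRequest.newBuilder()
                .setReadStream(streamName)
                // Resuming past the last row actually received triggers
                // "offset N has not been allocated yet".
                .setOffset(rowsAlreadyRead)
                .build();
        for (ReadRowsResponse response : client.readRowsCallable().call(request)) {
          // Consume response.getAvroRows() or response.getArrowRecordBatch().
        }
      }
    }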

    Mar 17, 2022 6:50:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-17T18:50:49.780Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGFhZ016SVNoTlJvehoCaXIaAmpk/streams/CAYaAmlyGgJqZCCg_rm3BygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGFhZ016SVNoTlJvehoCaXIaAmpk/streams/CAYaAmlyGgJqZCCg_rm3BygC': offset 80945 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGFhZ016SVNoTlJvehoCaXIaAmpk/streams/CAYaAmlyGgJqZCCg_rm3BygC': offset 80945 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 17, 2022 6:50:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T18:50:52.168Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 17, 2022 6:50:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T18:50:52.298Z: Cleaning up.
    Mar 17, 2022 6:50:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T18:50:52.377Z: Stopping worker pool...
    Mar 17, 2022 6:53:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T18:53:26.219Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 17, 2022 6:53:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T18:53:26.257Z: Worker pool stopped.
    Mar 17, 2022 6:53:32 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-17_11_48_46-7706226701496817302 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 19391765-574d-4ede-8d72-effa2c9afa7d and timestamp: 2022-03-17T18:53:32.562000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.791

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 17, 2022 6:53:32 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 5 mins 7.179 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 13s
165 actionable tasks: 106 executed, 57 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/dis6n7zvcungi

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3149

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3149/display/redirect>

Changes:


------------------------------------------
[...truncated 363.65 KB...]
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-17_05_45_20-16882664164356473874
    Mar 17, 2022 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-17T12:45:21.619Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 17, 2022 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T12:45:31.123Z: Worker configuration: e2-standard-2 in us-central1-f.
    Mar 17, 2022 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T12:45:31.969Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 17, 2022 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T12:45:31.996Z: Expanding GroupByKey operations into optimizable parts.
    Mar 17, 2022 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T12:45:32.014Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 17, 2022 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T12:45:32.115Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 17, 2022 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T12:45:32.144Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 17, 2022 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T12:45:32.175Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 17, 2022 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T12:45:32.580Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 17, 2022 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T12:45:32.642Z: Starting 5 workers in us-central1-f...
    Mar 17, 2022 12:45:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T12:45:52.999Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 17, 2022 12:46:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T12:46:15.809Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 17, 2022 12:46:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T12:46:41.318Z: Workers have started successfully.
    Mar 17, 2022 12:46:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T12:46:41.337Z: Workers have started successfully.
    Mar 17, 2022 12:47:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-17T12:47:11.950Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFFaUTlLT0p2aDh5ahoCamQaAmly/streams/CAgaAmpkGgJpciDKz8bRBCgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFFaUTlLT0p2aDh5ahoCamQaAmly/streams/CAgaAmpkGgJpciDKz8bRBCgC': offset 84694 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFFaUTlLT0p2aDh5ahoCamQaAmly/streams/CAgaAmpkGgJpciDKz8bRBCgC': offset 84694 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 17, 2022 12:47:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-17T12:47:13.253Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFFaUTlLT0p2aDh5ahoCamQaAmly/streams/CAYaAmpkGgJpciD8q6uVBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFFaUTlLT0p2aDh5ahoCamQaAmly/streams/CAYaAmpkGgJpciD8q6uVBygC': offset 85187 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFFaUTlLT0p2aDh5ahoCamQaAmly/streams/CAYaAmpkGgJpciD8q6uVBygC': offset 85187 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 17, 2022 12:47:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-17T12:47:14.242Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFFaUTlLT0p2aDh5ahoCamQaAmly/streams/CAUaAmpkGgJpciDL68YnKAI"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFFaUTlLT0p2aDh5ahoCamQaAmly/streams/CAUaAmpkGgJpciDL68YnKAI': offset 80364 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFFaUTlLT0p2aDh5ahoCamQaAmly/streams/CAUaAmpkGgJpciDL68YnKAI': offset 80364 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 17, 2022 12:47:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T12:47:17.430Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 17, 2022 12:47:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T12:47:17.577Z: Cleaning up.
    Mar 17, 2022 12:47:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T12:47:17.651Z: Stopping worker pool...
    Mar 17, 2022 12:49:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T12:49:48.569Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 17, 2022 12:49:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T12:49:48.629Z: Worker pool stopped.
    Mar 17, 2022 12:49:56 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-17_05_45_20-16882664164356473874 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): a362b4d5-c223-47aa-951e-d336d4cfa0f0 and timestamp: 2022-03-17T12:49:56.719000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    11.043

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 17, 2022 12:49:56 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 874 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.003 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.002 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 5 mins 0.588 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 34s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/qfbgoi2ou5sq6

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3148

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3148/display/redirect?page=changes>

Changes:

[noreply] Mapped JOB_STATE_RESOURCE_CLEANING_UP to State.RUNNING.


------------------------------------------
[...truncated 359.56 KB...]
    Mar 17, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 17, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 17, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 17, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 17, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 17, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 17, 2022 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 17, 2022 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 17, 2022 6:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 17, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 17, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-z5cV4yKD9a46JgIVKCKz3EGL_c35O7qG5MUA2zKPlik.jar
    Mar 17, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8073529252265047329.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-z_9re910ug4ndvLUiOSWJlu3qiU9Gh-mgvL6wcOMVPI.jar
    Mar 17, 2022 6:45:26 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 371 files cached, 1 files newly uploaded in 0 seconds
    Mar 17, 2022 6:45:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 17, 2022 6:45:26 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145671 bytes, hash 8127268ac9db3f21ba54291623e6e5ab6d107b4eb580bd779c241a2568c07a76> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-gScmisnbPyG6VCkWI-blq20Qe061gL13nCQaJWjAenY.pb
    Mar 17, 2022 6:45:29 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 17, 2022 6:45:29 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 17, 2022 6:45:30 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 17, 2022 6:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 17, 2022 6:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-16_23_45_30-468579912331484187?project=apache-beam-testing
    Mar 17, 2022 6:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-16_23_45_30-468579912331484187
    Mar 17, 2022 6:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-16_23_45_30-468579912331484187
    Mar 17, 2022 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-17T06:45:31.203Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 17, 2022 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T06:45:40.778Z: Worker configuration: e2-standard-2 in us-central1-a.
    Mar 17, 2022 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T06:45:41.505Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 17, 2022 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T06:45:41.541Z: Expanding GroupByKey operations into optimizable parts.
    Mar 17, 2022 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T06:45:41.568Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 17, 2022 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T06:45:41.633Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 17, 2022 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T06:45:41.654Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 17, 2022 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T06:45:41.684Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 17, 2022 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T06:45:42.042Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 17, 2022 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T06:45:42.112Z: Starting 5 workers in us-central1-a...
    Mar 17, 2022 6:46:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T06:46:08.487Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 17, 2022 6:46:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T06:46:24.219Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 17, 2022 6:46:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T06:46:48.952Z: Workers have started successfully.
    Mar 17, 2022 6:46:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T06:46:48.996Z: Workers have started successfully.
    Mar 17, 2022 6:47:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-17T06:47:18.267Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEN1SEpXY0xUMmVoWhoCamQaAmly/streams/CAUaAmpkGgJpciDKzJj6AigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEN1SEpXY0xUMmVoWhoCamQaAmly/streams/CAUaAmpkGgJpciDKzJj6AigC': offset 86897 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEN1SEpXY0xUMmVoWhoCamQaAmly/streams/CAUaAmpkGgJpciDKzJj6AigC': offset 86897 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

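The SEVERE entries above are the actual failure in this build: the BigQuery Storage Read API refuses to serve a read that resumes a stream at an offset the server has not produced yet, surfacing as FAILED_PRECONDITION "offset ... has not been allocated yet" when the worker retries mid-stream. A minimal sketch of the underlying read call with the google-cloud-bigquerystorage v1 client; the stream name is a placeholder and the hard-coded offset simply mirrors the log, so this is not Beam's reader code:

    import com.google.api.gax.rpc.ServerStream;
    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    public class ReadRowsSketch {
      public static void main(String[] args) throws Exception {
        try (BigQueryReadClient client = BigQueryReadClient.create()) {
          ReadRowsRequest request =
              ReadRowsRequest.newBuilder()
                  // Placeholder; a real name looks like the session/stream IDs above.
                  .setReadStream("projects/<project>/locations/us/sessions/<session>/streams/<stream>")
                  // Resuming past the server's high-water mark for this stream is what
                  // triggers FAILED_PRECONDITION: "offset 86897 has not been allocated yet".
                  .setOffset(86897)
                  .build();
          ServerStream<ReadRowsResponse> stream = client.readRowsCallable().call(request);
          long rows = 0;
          for (ReadRowsResponse response : stream) {
            rows += response.getRowCount();
          }
          System.out.println("rows read: " + rows);
        }
      }
    }
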
    Mar 17, 2022 6:47:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-17T06:47:19.311Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEN1SEpXY0xUMmVoWhoCamQaAmly/streams/CAMaAmpkGgJpciCR9-HLBCgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEN1SEpXY0xUMmVoWhoCamQaAmly/streams/CAMaAmpkGgJpciCR9-HLBCgC': offset 92622 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEN1SEpXY0xUMmVoWhoCamQaAmly/streams/CAMaAmpkGgJpciCR9-HLBCgC': offset 92622 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 17, 2022 6:47:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T06:47:20.437Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 17, 2022 6:47:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T06:47:20.565Z: Cleaning up.
    Mar 17, 2022 6:47:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T06:47:20.634Z: Stopping worker pool...
    Mar 17, 2022 6:49:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T06:49:45.839Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 17, 2022 6:49:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T06:49:45.884Z: Worker pool stopped.
    Mar 17, 2022 6:49:52 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-16_23_45_30-468579912331484187 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8d06b97d-dc42-468f-bfe0-136e243dfc51 and timestamp: 2022-03-17T06:49:52.491000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.215

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 17, 2022 6:49:52 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
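
The InfluxDB warning is a configuration gap rather than a test failure: InfluxDBPublisher needs a measurement and a database before it will export the load-test metrics printed above. Assuming this job takes the same options as other Beam performance jobs (the option names below are an assumption, not taken from this build's command line), the fix would be extra entries in -DbeamTestPipelineOptions, e.g.:

    "--influxMeasurement=sql_bqio_read_java_batch",
    "--influxDatabase=beam_test_metrics",
    "--influxHost=http://localhost:8086"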

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 43.31 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 33s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/5373oxc7zeuwo

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3147

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3147/display/redirect?page=changes>

Changes:

[ryanthompson591] fixed typo in typehints

[zyichi] Remove unused prebuild_sdk_container_base_iamge option from validate

[noreply] Merge pull request #16822 from [BEAM-13841][Playground] Add Application

[noreply] Minor: Make serializableCoder warning grammatically correct English

[noreply] [BEAM-14091] Fixing Interactive Beam show/collect for remote runners


------------------------------------------
[...truncated 373.65 KB...]
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-16_17_59_30-13846312947089996328
    Mar 17, 2022 12:59:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-17T00:59:31.190Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 17, 2022 12:59:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T00:59:43.918Z: Worker configuration: e2-standard-2 in us-central1-b.
    Mar 17, 2022 12:59:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T00:59:44.918Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 17, 2022 12:59:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T00:59:44.962Z: Expanding GroupByKey operations into optimizable parts.
    Mar 17, 2022 12:59:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T00:59:45.001Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 17, 2022 12:59:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T00:59:45.081Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 17, 2022 12:59:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T00:59:45.113Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 17, 2022 12:59:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T00:59:45.142Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 17, 2022 12:59:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T00:59:45.546Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 17, 2022 12:59:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T00:59:45.632Z: Starting 5 workers in us-central1-b...
    Mar 17, 2022 12:59:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T00:59:51.905Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 17, 2022 1:00:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T01:00:30.598Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 17, 2022 1:00:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T01:00:55.900Z: Workers have started successfully.
    Mar 17, 2022 1:00:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T01:00:55.928Z: Workers have started successfully.
    Mar 17, 2022 1:01:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-17T01:01:26.476Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFJ0bnhhYjVvRlAwNBoCamQaAmly/streams/CAIaAmpkGgJpciDvgLLRBSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFJ0bnhhYjVvRlAwNBoCamQaAmly/streams/CAIaAmpkGgJpciDvgLLRBSgC': offset 67212 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFJ0bnhhYjVvRlAwNBoCamQaAmly/streams/CAIaAmpkGgJpciDvgLLRBSgC': offset 67212 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 17, 2022 1:01:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-17T01:01:26.639Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFJ0bnhhYjVvRlAwNBoCamQaAmly/streams/CAkaAmpkGgJpciD32Z7tAigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFJ0bnhhYjVvRlAwNBoCamQaAmly/streams/CAkaAmpkGgJpciD32Z7tAigC': offset 123072 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFJ0bnhhYjVvRlAwNBoCamQaAmly/streams/CAkaAmpkGgJpciD32Z7tAigC': offset 123072 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 17, 2022 1:01:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-17T01:01:27.670Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFJ0bnhhYjVvRlAwNBoCamQaAmly/streams/CAQaAmpkGgJpciCEuLbXASgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFJ0bnhhYjVvRlAwNBoCamQaAmly/streams/CAQaAmpkGgJpciCEuLbXASgC': offset 108389 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFJ0bnhhYjVvRlAwNBoCamQaAmly/streams/CAQaAmpkGgJpciCEuLbXASgC': offset 108389 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 17, 2022 1:01:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T01:01:31.965Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 17, 2022 1:01:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T01:01:32.232Z: Cleaning up.
    Mar 17, 2022 1:01:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T01:01:32.375Z: Stopping worker pool...
    Mar 17, 2022 1:04:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T01:03:59.750Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 17, 2022 1:04:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-17T01:03:59.835Z: Worker pool stopped.
    Mar 17, 2022 1:04:35 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-16_17_59_30-13846312947089996328 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6de52350-7217-415c-9adf-8c3e70f11e3e and timestamp: 2022-03-17T01:04:35.454000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    11.819

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 17, 2022 1:04:35 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.066 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.08 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 5 mins 36.22 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 8m 13s
165 actionable tasks: 104 executed, 59 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/fmtamtdgwl6cw

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3146

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3146/display/redirect?page=changes>

Changes:

[hengfeng] feat: add more custom metrics

[noreply] [BEAM-14103][Playground][Bugfix] Fix google analytics id (#17092)

[noreply] Minor: Make ScopedReadStateSupplier final (#16992)

[noreply] [BEAM-14113] Improve SamzaJobServerDriver extensibility (#17099)

[noreply] [BEAM-14116] Chunk commit requests dynamically (#17004)

[noreply] Merge pull request #17079 from [BEAM-13660] Add types and queries in

[noreply] [BEAM-13888] Add unit testing to ioutilx (#17058)


------------------------------------------
[...truncated 359.74 KB...]
    Mar 16, 2022 7:18:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 16, 2022 7:18:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 16, 2022 7:18:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 16, 2022 7:18:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 16, 2022 7:18:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 16, 2022 7:18:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 16, 2022 7:18:46 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 16, 2022 7:18:46 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
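
Same push-down as in build #3147, seen from the BigQuery Storage API side: the usedFields become the session's selected fields and the pushed-down filter becomes its row restriction, so filtering happens in BigQuery before any bytes reach the Dataflow workers. A sketch of that session setup with the v1 storage client, with the parent project, table path, and stream count as placeholders rather than values from this job:

    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.CreateReadSessionRequest;
    import com.google.cloud.bigquery.storage.v1.DataFormat;
    import com.google.cloud.bigquery.storage.v1.ReadSession;

    public class PushDownSessionSketch {
      public static void main(String[] args) throws Exception {
        try (BigQueryReadClient client = BigQueryReadClient.create()) {
          ReadSession.TableReadOptions readOptions =
              ReadSession.TableReadOptions.newBuilder()
                  // Column projection: only the fields the query references.
                  .addSelectedFields("by")
                  .addSelectedFields("type")
                  .addSelectedFields("title")
                  .addSelectedFields("score")
                  // Filter push-down: the WHERE clause, evaluated by BigQuery.
                  .setRowRestriction("(type = 'story' OR type = 'job') AND score > 2")
                  .build();
          ReadSession session =
              client.createReadSession(
                  CreateReadSessionRequest.newBuilder()
                      .setParent("projects/<project>")
                      .setReadSession(
                          ReadSession.newBuilder()
                              .setTable("projects/<project>/datasets/<dataset>/tables/<table>")
                              .setDataFormat(DataFormat.AVRO)
                              .setReadOptions(readOptions))
                      .setMaxStreamCount(5)
                      .build());
          System.out.println("streams: " + session.getStreamsCount());
        }
      }
    }
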
    Mar 16, 2022 7:18:46 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 16, 2022 7:18:51 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 16, 2022 7:18:51 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-z5cV4yKD9a46JgIVKCKz3EGL_c35O7qG5MUA2zKPlik.jar
    Mar 16, 2022 7:18:51 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8022826166942126307.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-yRrApgpST4VIENEBvkskanVIUugpN7JX_aZC6oETLOA.jar
    Mar 16, 2022 7:18:52 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 371 files cached, 1 files newly uploaded in 0 seconds
    Mar 16, 2022 7:18:52 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 16, 2022 7:18:52 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145671 bytes, hash 78758cc332739fb043716cdb06eac586436e4f75abd6755d53e25e10a7013b8a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-eHWMwzJzn7BDcWzbBurFhkNuT3Wr1nVdU-JeEKcBO4o.pb
    Mar 16, 2022 7:18:55 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 16, 2022 7:18:56 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 16, 2022 7:18:56 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 16, 2022 7:18:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 16, 2022 7:18:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-16_12_18_56-4435548904190899592?project=apache-beam-testing
    Mar 16, 2022 7:18:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-16_12_18_56-4435548904190899592
    Mar 16, 2022 7:18:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-16_12_18_56-4435548904190899592
    Mar 16, 2022 7:19:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-16T19:18:57.590Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 16, 2022 7:19:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T19:19:09.656Z: Worker configuration: e2-standard-2 in us-central1-b.
    Mar 16, 2022 7:19:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T19:19:10.430Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 16, 2022 7:19:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T19:19:10.459Z: Expanding GroupByKey operations into optimizable parts.
    Mar 16, 2022 7:19:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T19:19:10.499Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 16, 2022 7:19:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T19:19:10.569Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 16, 2022 7:19:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T19:19:10.597Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 16, 2022 7:19:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T19:19:10.623Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 16, 2022 7:19:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T19:19:11.003Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 16, 2022 7:19:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T19:19:11.081Z: Starting 5 workers in us-central1-b...
    Mar 16, 2022 7:19:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T19:19:33.073Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 16, 2022 7:19:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T19:19:56.083Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 16, 2022 7:20:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T19:20:21.100Z: Workers have started successfully.
    Mar 16, 2022 7:20:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T19:20:21.155Z: Workers have started successfully.
    Mar 16, 2022 7:21:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-16T19:21:00.174Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEN3MHBsSUl5VlM1WRoCamQaAmly/streams/CAcaAmpkGgJpciDh0f-vBSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEN3MHBsSUl5VlM1WRoCamQaAmly/streams/CAcaAmpkGgJpciDh0f-vBSgC': offset 69113 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEN3MHBsSUl5VlM1WRoCamQaAmly/streams/CAcaAmpkGgJpciDh0f-vBSgC': offset 69113 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

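The FAILED_PRECONDITION above ("offset 69113 has not been allocated yet") is raised by the BigQuery Storage Read API when a reader requests a row offset the server has not yet committed for that stream; it is typically transient and resolves once the stream catches up. A minimal sketch of resuming a ReadRows call at the failing offset with bounded backoff, assuming the v1 Java client (the class, helper name, and retry policy are illustrative, not the Dataflow worker's actual recovery logic):

    import com.google.api.gax.rpc.FailedPreconditionException;
    import com.google.api.gax.rpc.ServerStream;
    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    public class ResumeReadSketch {
      // Re-issue ReadRows from `offset`, backing off while the server still
      // reports the offset as not yet allocated.
      static void drainFrom(BigQueryReadClient client, String stream, long offset)
          throws InterruptedException {
        long backoffMillis = 500;
        for (int attempt = 0; attempt < 5; attempt++) {
          ReadRowsRequest request =
              ReadRowsRequest.newBuilder().setReadStream(stream).setOffset(offset).build();
          try {
            ServerStream<ReadRowsResponse> responses = client.readRowsCallable().call(request);
            for (ReadRowsResponse response : responses) {
              offset += response.getRowCount();  // resume point if the stream breaks later
            }
            return;  // stream fully drained
          } catch (FailedPreconditionException e) {
            Thread.sleep(backoffMillis);  // transient: wait, then resume at `offset`
            backoffMillis *= 2;
          }
        }
        throw new IllegalStateException("offset still unallocated after retries");
      }
    }

In the worker the same condition surfaces through BigQueryStorageStreamReader.advance, as the suppressed frames above show.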
    Mar 16, 2022 7:21:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-16T19:21:00.317Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEN3MHBsSUl5VlM1WRoCamQaAmly/streams/CAMaAmpkGgJpciCtqIblAygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEN3MHBsSUl5VlM1WRoCamQaAmly/streams/CAMaAmpkGgJpciCtqIblAygC': offset 64707 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEN3MHBsSUl5VlM1WRoCamQaAmly/streams/CAMaAmpkGgJpciCtqIblAygC': offset 64707 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 16, 2022 7:21:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T19:21:04.652Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 16, 2022 7:21:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T19:21:04.765Z: Cleaning up.
    Mar 16, 2022 7:21:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T19:21:04.841Z: Stopping worker pool...
    Mar 16, 2022 7:23:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T19:23:26.845Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 16, 2022 7:23:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T19:23:26.890Z: Worker pool stopped.
    Mar 16, 2022 7:23:33 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-16_12_18_56-4435548904190899592 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 0ad87737-e15e-4a91-b111-b9024ee72052 and timestamp: 2022-03-16T19:23:34.114000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    12.804

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 16, 2022 7:23:34 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
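The warning above comes from Beam's InfluxDBPublisher: when the measurement and database settings are absent, the collected metrics are printed but never uploaded. In Beam's test utilities these are normally supplied as pipeline options, e.g. (option names assumed from org.apache.beam.sdk.testutils.publishing; values illustrative):

    --influxDatabase=beam_test_metrics --influxMeasurement=sql_bqio_read_java_batch --influxHost=http://localhost:8086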

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.186 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.18 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 5 mins 1.457 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.
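For a local reproduction, the failing task can be rerun with the suggested flags, e.g. from the Beam source root (assuming the standard Gradle wrapper):

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --stacktrace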

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 58s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/ksmvijflwdtbk

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3145

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3145/display/redirect>

Changes:


------------------------------------------
[...truncated 357.91 KB...]
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 16, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 16, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 16, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 16, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 16, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
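The BEAMPlan above shows the projection and filter being pushed into the BigQuery Storage read itself rather than evaluated in the pipeline. A minimal standalone sketch of the equivalent direct read with BigQueryIO, assuming a table reference for the HACKER_NEWS table (the selected fields and row restriction are taken verbatim from the plan; the table name is an assumption):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;

    public class PushDownReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        PCollection<TableRow> rows =
            p.apply(
                BigQueryIO.readTableRows()
                    .from("apache-beam-testing:beam.HACKER_NEWS")  // assumed table reference
                    .withMethod(Method.DIRECT_READ)
                    // Only the four projected columns cross the wire...
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    // ...and the predicate is evaluated server-side, as in the plan above.
                    .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }

Compared with a plain read of HACKER_NEWS, this transfers only the projected columns and lets the Storage API evaluate the predicate, which is exactly what the usedFields and BigQueryFilter attributes in the plan record.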
    Mar 16, 2022 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 16, 2022 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 16, 2022 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 16, 2022 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6427551306784256592.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-QEBlUZ2qsGrybwnbeYq4wj3lPOPrhLyFbeQlIsH5eqA.jar
    Mar 16, 2022 12:45:09 PM org.apache.beam.sdk.metrics.MetricsEnvironment getCurrentContainer
    WARNING: Reporting metrics are not supported in the current execution environment.
    Mar 16, 2022 12:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 371 files cached, 1 files newly uploaded in 7 seconds
    Mar 16, 2022 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 16, 2022 12:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145671 bytes, hash ba1501481d7c0fccfab93d07a782d1f5926f50dc9f27a7f47b2116c887f9cf59> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-uhUBSB18D8z6uT0Hp4LR9ZJvUNyfJ6f0eyEWyIf5z1k.pb
    Mar 16, 2022 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 16, 2022 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 16, 2022 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 16, 2022 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 16, 2022 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-16_05_45_13-6594689197985557668?project=apache-beam-testing
    Mar 16, 2022 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-16_05_45_13-6594689197985557668
    Mar 16, 2022 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-16_05_45_13-6594689197985557668
    Mar 16, 2022 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-16T12:45:15.074Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 16, 2022 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T12:45:31.026Z: Worker configuration: e2-standard-2 in us-central1-f.
    Mar 16, 2022 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T12:45:31.594Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 16, 2022 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T12:45:31.625Z: Expanding GroupByKey operations into optimizable parts.
    Mar 16, 2022 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T12:45:31.655Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 16, 2022 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T12:45:31.732Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 16, 2022 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T12:45:31.756Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 16, 2022 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T12:45:31.805Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 16, 2022 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T12:45:32.323Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 16, 2022 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T12:45:32.404Z: Starting 5 workers in us-central1-f...
    Mar 16, 2022 12:45:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T12:45:35.607Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
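Once a project hits the 100-descriptor cap noted above, new custom.googleapis.com/* metrics are silently dropped. A minimal sketch of listing and deleting stale custom descriptors with the Cloud Monitoring v3 Java client, following the list/delete APIs the message links to (the project id is taken from the log; a blanket delete of all custom descriptors is only illustrative and would be destructive in practice):

    import com.google.api.MetricDescriptor;
    import com.google.cloud.monitoring.v3.MetricServiceClient;
    import com.google.monitoring.v3.ProjectName;

    public class DescriptorCleanupSketch {
      public static void main(String[] args) throws Exception {
        try (MetricServiceClient client = MetricServiceClient.create()) {
          ProjectName project = ProjectName.of("apache-beam-testing");  // project from the log
          for (MetricDescriptor d : client.listMetricDescriptors(project).iterateAll()) {
            // Only user-defined descriptors can be deleted; deletion is permanent,
            // so a real cleanup should filter to descriptors known to be unused.
            if (d.getType().startsWith("custom.googleapis.com/")) {
              client.deleteMetricDescriptor(d.getName());
            }
          }
        }
      }
    }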
    Mar 16, 2022 12:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T12:46:03.528Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Mar 16, 2022 12:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T12:46:03.581Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Mar 16, 2022 12:46:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T12:46:13.872Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 16, 2022 12:46:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T12:46:37.384Z: Workers have started successfully.
    Mar 16, 2022 12:46:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T12:46:37.418Z: Workers have started successfully.
    Mar 16, 2022 12:47:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-16T12:47:10.578Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDG4xanVNMlZDbXVtaxoCamQaAmly/streams/CAcaAmpkGgJpciDB6qOnASgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG4xanVNMlZDbXVtaxoCamQaAmly/streams/CAcaAmpkGgJpciDB6qOnASgC': offset 94279 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG4xanVNMlZDbXVtaxoCamQaAmly/streams/CAcaAmpkGgJpciDB6qOnASgC': offset 94279 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 16, 2022 12:47:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-16T12:47:12.579Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDG4xanVNMlZDbXVtaxoCamQaAmly/streams/CAYaAmpkGgJpciCbw6q7ASgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG4xanVNMlZDbXVtaxoCamQaAmly/streams/CAYaAmpkGgJpciCbw6q7ASgC': offset 90702 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG4xanVNMlZDbXVtaxoCamQaAmly/streams/CAYaAmpkGgJpciCbw6q7ASgC': offset 90702 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 16, 2022 12:47:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T12:47:14.734Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 16, 2022 12:47:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T12:47:14.902Z: Cleaning up.
    Mar 16, 2022 12:47:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T12:47:15.009Z: Stopping worker pool...
    Mar 16, 2022 12:49:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T12:49:37.506Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 16, 2022 12:49:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T12:49:37.543Z: Worker pool stopped.
    Mar 16, 2022 12:49:42 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-16_05_45_13-6594689197985557668 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6bf5a5cc-9880-4710-962a-7696647921cc and timestamp: 2022-03-16T12:49:42.879000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.466

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 16, 2022 12:49:42 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 55.661 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 21s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/yptpq42yes46k

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3144

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3144/display/redirect?page=changes>

Changes:

[Chamikara Madhusanka Jayalath] Updates x-lang release validation to use staged jars


------------------------------------------
[...truncated 357.75 KB...]
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 16, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 16, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 16, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 16, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 16, 2022 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 16, 2022 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 16, 2022 6:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 16, 2022 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 16, 2022 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 16, 2022 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3916551688809416626.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Ek_ewJoZCdmZnLcEZo1_bCjPHxflEipFYoima182dnE.jar
    Mar 16, 2022 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 371 files cached, 1 files newly uploaded in 0 seconds
    Mar 16, 2022 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 16, 2022 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145672 bytes, hash 2241e80d693783d78bb5585dd89c987f439811870f56900e19fde7a89f1b89e2> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-IkHoDWk3g9eLtVhd2JyYf0OYEYcPVpAOGf3nqJ8bieI.pb
    Mar 16, 2022 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 16, 2022 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 16, 2022 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 16, 2022 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 16, 2022 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-15_23_45_08-3835301170738303118?project=apache-beam-testing
    Mar 16, 2022 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-15_23_45_08-3835301170738303118
    Mar 16, 2022 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-15_23_45_08-3835301170738303118
    Mar 16, 2022 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-16T06:45:09.545Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 16, 2022 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T06:45:19.497Z: Worker configuration: e2-standard-2 in us-central1-f.
    Mar 16, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T06:45:20.631Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 16, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T06:45:20.669Z: Expanding GroupByKey operations into optimizable parts.
    Mar 16, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T06:45:20.698Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 16, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T06:45:20.775Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 16, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T06:45:20.808Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 16, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T06:45:20.834Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 16, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T06:45:21.223Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 16, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T06:45:21.298Z: Starting 5 workers in us-central1-f...
    Mar 16, 2022 6:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T06:45:46.904Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 16, 2022 6:45:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T06:45:54.279Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Mar 16, 2022 6:45:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T06:45:54.318Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Mar 16, 2022 6:46:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T06:46:04.623Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 16, 2022 6:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T06:46:28.609Z: Workers have started successfully.
    Mar 16, 2022 6:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T06:46:28.636Z: Workers have started successfully.
    Mar 16, 2022 6:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-16T06:47:01.412Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDFKQWJuekVIb3RXZxoCamQaAmly/streams/CAEaAmpkGgJpciCh-eqlASgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDFKQWJuekVIb3RXZxoCamQaAmly/streams/CAEaAmpkGgJpciCh-eqlASgC': offset 89220 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDFKQWJuekVIb3RXZxoCamQaAmly/streams/CAEaAmpkGgJpciCh-eqlASgC': offset 89220 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 16, 2022 6:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-16T06:47:02.404Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDFKQWJuekVIb3RXZxoCamQaAmly/streams/CAMaAmpkGgJpciCp1ZoGKAI"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDFKQWJuekVIb3RXZxoCamQaAmly/streams/CAMaAmpkGgJpciCp1ZoGKAI': offset 89654 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDFKQWJuekVIb3RXZxoCamQaAmly/streams/CAMaAmpkGgJpciCp1ZoGKAI': offset 89654 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
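
A note on the FAILED_PRECONDITION above: the BigQuery Storage Read API rejects a ReadRows call whose requested stream offset is past what the server has produced ("allocated") so far, which can happen when a reader resumes or retries a stream at a stale offset. The sketch below shows the shape of such a resume request using the google-cloud-bigquerystorage v1 client; the stream name and offset are placeholders, and this is illustrative rather than the Beam worker's actual code path.

    // Sketch only: resume reading a Storage Read API stream at a given row offset.
    // Types are from com.google.cloud.bigquery.storage.v1.
    try (BigQueryReadClient client = BigQueryReadClient.create()) {
      long nextOffset = 0;  // placeholder: first row not yet processed
      ReadRowsRequest request =
          ReadRowsRequest.newBuilder()
              .setReadStream("projects/<project>/locations/us/sessions/<session>/streams/<stream>")
              .setOffset(nextOffset)  // must not exceed the offset the server has allocated
              .build();
      for (ReadRowsResponse response : client.readRowsCallable().call(request)) {
        // consume response.getAvroRows() or response.getArrowRecordBatch()
      }
    }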

    Mar 16, 2022 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T06:47:04.879Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 16, 2022 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T06:47:05.092Z: Cleaning up.
    Mar 16, 2022 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T06:47:05.183Z: Stopping worker pool...
    Mar 16, 2022 6:49:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T06:49:34.611Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 16, 2022 6:49:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T06:49:34.671Z: Worker pool stopped.
    Mar 16, 2022 6:49:39 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-15_23_45_08-3835301170738303118 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 9ab83a97-d9dd-4f55-8188-797741555801 and timestamp: 2022-03-16T06:49:39.913000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.179

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 16, 2022 6:49:40 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
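
    (This measurement/database warning is separate from the test failures; it only means the InfluxDB metrics sink was not configured for this run. Other Beam perf-test jobs supply this via pipeline options along the lines of --influxMeasurement=sql_bqio_read_java_batch, --influxDatabase=<database>, and --influxHost=<host>; those option names are an assumption here and worth verifying against InfluxDBPublisher's expected properties.)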

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 53.265 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 20s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/eobjuh7b43ito

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3143

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3143/display/redirect?page=changes>

Changes:

[noreply] Merge pull request #17052 from [BEAM-13818] [SnowflakeIO] Add support

[noreply] Adding pydoc for StateHandler (#17091)

[noreply] BEAM-3165 Bypass split if numSplit is zero (#17084)


------------------------------------------
[...truncated 363.79 KB...]
    Mar 16, 2022 12:48:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 16, 2022 12:48:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 16, 2022 12:48:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 16, 2022 12:48:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 16, 2022 12:48:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 16, 2022 12:48:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 16, 2022 12:48:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
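
For readers skimming the plan output: the BEAMPlan above shows both projection and predicate push-down. Instead of a BeamIOSourceRel feeding a filter, the planner emitted a single BeamPushDownIOSourceRel, so only the four used fields are read and the filter is evaluated inside the BigQuery storage layer. Roughly, the read is equivalent to the plain BigQueryIO sketch below; the table reference is a placeholder and this is not the integration test's actual code.

    // Sketch: the pushed-down read expressed directly with BigQueryIO.
    // Types: org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO, java.util.Arrays.
    PCollection<TableRow> rows =
        pipeline.apply(
            BigQueryIO.readTableRows()
                .from("<project>:<dataset>.HACKER_NEWS")  // placeholder table reference
                .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));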
    Mar 16, 2022 12:48:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 16, 2022 12:48:16 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 16, 2022 12:48:16 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 16, 2022 12:48:16 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test757500518255828627.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-DTaStJsBRtb1HGXRh9id5ls4aWCdDIKTYfMzkQYkiGg.jar
    Mar 16, 2022 12:48:17 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.38.0-SNAPSHOT-qG5Q_aPDzsCHfDatVATSeNycu_a8Ct27dHbs_h42d_o.jar
    Mar 16, 2022 12:48:20 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 370 files cached, 2 files newly uploaded in 4 seconds
    Mar 16, 2022 12:48:20 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 16, 2022 12:48:21 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145671 bytes, hash f4fa56da0edfb44bd249f580e24b750b5095f2533ee38a817483abe2485b995f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-9PpW2g7ftEvSSfWA4kt1C1CV8lM-44qBdIOr4khbmV8.pb
    Mar 16, 2022 12:48:26 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 16, 2022 12:48:27 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 16, 2022 12:48:27 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 16, 2022 12:48:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 16, 2022 12:48:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-15_17_48_27-11610495952681301651?project=apache-beam-testing
    Mar 16, 2022 12:48:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-15_17_48_27-11610495952681301651
    Mar 16, 2022 12:48:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-15_17_48_27-11610495952681301651
    Mar 16, 2022 12:48:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-16T00:48:28.910Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 16, 2022 12:48:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T00:48:37.136Z: Worker configuration: e2-standard-2 in us-central1-b.
    Mar 16, 2022 12:48:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T00:48:37.802Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 16, 2022 12:48:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T00:48:37.866Z: Expanding GroupByKey operations into optimizable parts.
    Mar 16, 2022 12:48:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T00:48:37.902Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 16, 2022 12:48:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T00:48:37.969Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 16, 2022 12:48:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T00:48:38.000Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 16, 2022 12:48:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T00:48:38.037Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 16, 2022 12:48:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T00:48:38.396Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 16, 2022 12:48:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T00:48:38.471Z: Starting 5 workers in us-central1-b...
    Mar 16, 2022 12:49:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T00:49:02.816Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 16, 2022 12:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T00:49:22.820Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 16, 2022 12:49:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T00:49:48.784Z: Workers have started successfully.
    Mar 16, 2022 12:49:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T00:49:48.812Z: Workers have started successfully.
    Mar 16, 2022 12:50:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-16T00:50:32.489Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFF1UTRYRVQzSmhBdRoCamQaAmly/streams/CAkaAmpkGgJpciC1gvGfBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFF1UTRYRVQzSmhBdRoCamQaAmly/streams/CAkaAmpkGgJpciC1gvGfBygC': offset 105753 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFF1UTRYRVQzSmhBdRoCamQaAmly/streams/CAkaAmpkGgJpciC1gvGfBygC': offset 105753 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 16, 2022 12:50:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-16T00:50:32.623Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFF1UTRYRVQzSmhBdRoCamQaAmly/streams/CAgaAmpkGgJpciDNrbTJAigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFF1UTRYRVQzSmhBdRoCamQaAmly/streams/CAgaAmpkGgJpciDNrbTJAigC': offset 72634 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFF1UTRYRVQzSmhBdRoCamQaAmly/streams/CAgaAmpkGgJpciDNrbTJAigC': offset 72634 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 16, 2022 12:50:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T00:50:34.548Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 16, 2022 12:50:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T00:50:34.692Z: Cleaning up.
    Mar 16, 2022 12:50:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T00:50:34.765Z: Stopping worker pool...
    Mar 16, 2022 12:52:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T00:52:50.676Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 16, 2022 12:52:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-16T00:52:50.721Z: Worker pool stopped.
    Mar 16, 2022 12:52:57 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-15_17_48_27-11610495952681301651 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 19aa6b9a-0738-4626-aace-bb54fe739a0d and timestamp: 2022-03-16T00:52:57.697000000Z:
                     Metric:                    Value:
                   read_time                      9.79
                 fields_read                 4632376.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 16, 2022 12:52:57 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.037 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 5 mins 16.849 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 52s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/eydscddhmfjeu

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3142

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3142/display/redirect?page=changes>

Changes:

[dhuntsperger] documented maven-to-gradle conversion for Dataflow; refactored java

[dhuntsperger] adding a list of example pipelines

[dhuntsperger] removing unnecessary `ls` command from maven project generation

[dhuntsperger] fixing filename formatting in response to feedback

[dhuntsperger] adding extra step emphasizing runner setup

[dhuntsperger] reorganized instructions to emphasize setup steps for runners

[noreply] [BEAM-13767] Move a bunch of python tasks to use gradle configuration…


------------------------------------------
[...truncated 349.20 KB...]
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is b2cb61a2b9885480e03207b5d9b161df
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]
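
    (The SLF4J warning above is cosmetic: when several bindings are on the classpath, SLF4J binds to just one of them, and the "Actual binding" line shows logging is routed through java.util.logging via slf4j-jdk14.)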

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 15, 2022 7:01:51 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
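
    (Per the warning, the fix is mechanical: in the -DbeamTestPipelineOptions list shown earlier, pass --sdkContainerImage= instead of the deprecated --workerHarnessContainerImage=; this is inferred from the warning text rather than from the job configuration.)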
    Mar 15, 2022 7:01:52 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 15, 2022 7:01:53 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 371 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 15, 2022 7:01:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 15, 2022 7:01:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 15, 2022 7:01:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 15, 2022 7:01:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 15, 2022 7:01:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 15, 2022 7:01:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 15, 2022 7:01:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1293369737]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
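
The exception message itself names the remedy: the PCollection of Beam Rows produced for this non-push-down plan has no schema attached, so no coder can be inferred for it. A minimal sketch of the suggested fix via PCollection.setRowSchema follows; the field names mirror the query's output columns, but the exact schema and nullability used by the test may differ.

    // Sketch: attach a schema so a PCollection<Row> gets a RowCoder.
    // Types: org.apache.beam.sdk.schemas.Schema, org.apache.beam.sdk.values.PCollection,
    // org.apache.beam.sdk.values.Row.
    static PCollection<Row> withRowSchema(PCollection<Row> rows) {
      Schema schema =
          Schema.builder()
              .addNullableField("author", Schema.FieldType.STRING)
              .addNullableField("type", Schema.FieldType.STRING)
              .addNullableField("title", Schema.FieldType.STRING)
              .addNullableField("score", Schema.FieldType.INT64)
              .build();
      return rows.setRowSchema(schema);  // equivalent to rows.setCoder(RowCoder.of(schema))
    }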

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 15, 2022 7:01:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 15, 2022 7:01:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 15, 2022 7:01:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 15, 2022 7:01:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 15, 2022 7:01:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 15, 2022 7:01:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 15, 2022 7:01:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1651372403]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 15, 2022 7:01:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 15, 2022 7:01:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 15, 2022 7:01:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 15, 2022 7:01:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 15, 2022 7:01:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 15, 2022 7:01:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 15, 2022 7:01:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 15, 2022 7:01:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 15, 2022 7:01:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 15, 2022 7:02:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 15, 2022 7:02:03 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 15, 2022 7:02:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test967286019172503378.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-WoRU7F3LhVuqYjIglc25iCc3H0Y-r4EW2xvwu7gD6xc.jar
    Mar 15, 2022 7:02:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 371 files cached, 1 files newly uploaded in 2 seconds
    Mar 15, 2022 7:02:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 15, 2022 7:02:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145671 bytes, hash a1b05ef21f0e82114fca2e6e42e6b37e7fa8b46ca53560cc70d4c04aaae12601> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-obBe8h8OghFPyi5uQuazfn-otGylNWDMcNTASqrhJgE.pb
    Mar 15, 2022 7:02:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 15, 2022 7:02:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 15, 2022 7:02:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 15, 2022 7:02:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 15, 2022 7:02:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-15_12_02_09-3352815391314379594?project=apache-beam-testing
    Mar 15, 2022 7:02:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-15_12_02_09-3352815391314379594
    Mar 15, 2022 7:02:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-15_12_02_09-3352815391314379594
    Mar 15, 2022 7:02:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-15T19:02:11.848Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 15, 2022 7:02:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T19:02:24.643Z: Worker configuration: e2-standard-2 in us-central1-a.
    Mar 15, 2022 7:02:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T19:02:25.350Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 15, 2022 7:02:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T19:02:25.382Z: Expanding GroupByKey operations into optimizable parts.
    Mar 15, 2022 7:02:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T19:02:25.410Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 15, 2022 7:02:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T19:02:25.486Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 15, 2022 7:02:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T19:02:25.519Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 15, 2022 7:02:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T19:02:25.547Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 15, 2022 7:02:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T19:02:25.899Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 15, 2022 7:02:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T19:02:25.973Z: Starting 5 workers in us-central1-a...
    Mar 15, 2022 7:02:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T19:02:49.425Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 15, 2022 7:03:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T19:03:08.102Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Mar 15, 2022 7:03:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T19:03:08.164Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Mar 15, 2022 7:03:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T19:03:18.519Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 15, 2022 7:03:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T19:03:41.332Z: Workers have started successfully.
    Mar 15, 2022 7:04:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T19:04:15.272Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 15, 2022 7:04:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T19:04:15.476Z: Cleaning up.
    Mar 15, 2022 7:04:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T19:04:15.575Z: Stopping worker pool...
    Mar 15, 2022 7:07:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T19:07:02.755Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 15, 2022 7:07:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T19:07:02.802Z: Worker pool stopped.
    Mar 15, 2022 7:07:09 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-15_12_02_09-3352815391314379594 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): a0abfc8f-93cb-4b0e-811f-4844592886e1 and timestamp: 2022-03-15T19:07:09.403000000Z:
                     Metric:                    Value:
                   read_time                    11.169
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 15, 2022 7:07:09 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 5 mins 24.78 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 30s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/ldydhds67vhi4

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3141

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3141/display/redirect?page=changes>

Changes:

[mmack] [BEAM-13175] Add KinesisIO.write for AWS SDK v2.

[mmack] [BEAM-13175] Document changes to IOs in amazon-web-services2 for AWS


------------------------------------------
[...truncated 355.04 KB...]
Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 816cc43da211053dd64ef16b9cacc489
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 15, 2022 1:04:04 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 15, 2022 1:04:04 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 15, 2022 1:04:05 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 371 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 15, 2022 1:04:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 15, 2022 1:04:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 15, 2022 1:04:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 15, 2022 1:04:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 15, 2022 1:04:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 15, 2022 1:04:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 15, 2022 1:04:10 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1293369737]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
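
    (Editor's note: the failure above is Beam's standard coder-inference error for a
    PCollection of Row: the ParDo(RowMonitor) step emits Row elements without a schema
    attached, so none of the three fallbacks listed in the message applies. Below is a
    minimal sketch of the remedy the message itself points to, PCollection.setRowSchema.
    The field names follow the projected columns of the query above; the field types,
    nullability, and sample values are assumptions for illustration only.)

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFixSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Field names follow the projected columns of the query above; the
        // types, nullability, and sample values are assumptions.
        final Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();

        // A ParDo that emits Row, analogous to the ParDo(RowMonitor) step in the log.
        PCollection<Row> rows =
            p.apply(Create.of("one input"))
                .apply(ParDo.of(
                    new DoFn<String, Row>() {
                      @ProcessElement
                      public void processElement(@Element String ignored, OutputReceiver<Row> out) {
                        out.output(
                            Row.withSchema(schema)
                                .addValues("someone", "story", "a title", 3L)
                                .build());
                      }
                    }))
                // Without this call, pipeline construction fails with the same
                // IllegalStateException as above: Beam cannot infer a coder for
                // Row elements on its own.
                .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }

    (Attaching the schema, or equivalently an explicit RowCoder, satisfies the
    "Correct one of the following root causes" list in the exception.)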

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 15, 2022 1:04:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 15, 2022 1:04:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 15, 2022 1:04:10 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 15, 2022 1:04:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 15, 2022 1:04:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 15, 2022 1:04:10 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 15, 2022 1:04:11 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@15076442]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 15, 2022 1:04:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 15, 2022 1:04:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 15, 2022 1:04:11 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 15, 2022 1:04:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 15, 2022 1:04:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 15, 2022 1:04:11 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 15, 2022 1:04:11 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 15, 2022 1:04:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 15, 2022 1:04:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 15, 2022 1:04:16 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 15, 2022 1:04:17 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 15, 2022 1:04:17 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7960289482047535288.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Uh1axxD13LydRiXmXC_82_CppPw3DLHKovr6dZ9Y4DM.jar
    Mar 15, 2022 1:04:21 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 371 files cached, 1 file newly uploaded in 4 seconds
    Mar 15, 2022 1:04:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 15, 2022 1:04:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145672 bytes, hash 4a664847865ce15660b0abefa8f7ac7b91006f9b38de5abf44234beeab59e066> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-SmZIR4Zc4VZgsKvvqPese5EAb5s43lq_RCNL7qtZ4GY.pb
    Mar 15, 2022 1:04:24 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 15, 2022 1:04:25 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 15, 2022 1:04:25 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 15, 2022 1:04:25 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 15, 2022 1:04:25 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-15_06_04_25-9326979274805547998?project=apache-beam-testing
    Mar 15, 2022 1:04:25 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-15_06_04_25-9326979274805547998
    Mar 15, 2022 1:04:25 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-15_06_04_25-9326979274805547998
    Mar 15, 2022 1:04:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-15T13:04:27.411Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 15, 2022 1:04:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T13:04:37.788Z: Worker configuration: e2-standard-2 in us-central1-a.
    Mar 15, 2022 1:04:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T13:04:38.619Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 15, 2022 1:04:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T13:04:38.677Z: Expanding GroupByKey operations into optimizable parts.
    Mar 15, 2022 1:04:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T13:04:38.715Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 15, 2022 1:04:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T13:04:38.826Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 15, 2022 1:04:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T13:04:38.916Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 15, 2022 1:04:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T13:04:38.952Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 15, 2022 1:04:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T13:04:39.399Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 15, 2022 1:04:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T13:04:39.504Z: Starting 5 workers in us-central1-a...
    Mar 15, 2022 1:05:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T13:05:02.561Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 15, 2022 1:05:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T13:05:32.016Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 15, 2022 1:05:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T13:05:55.517Z: Workers have started successfully.
    Mar 15, 2022 1:06:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T13:06:26.842Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 15, 2022 1:06:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T13:06:27.042Z: Cleaning up.
    Mar 15, 2022 1:06:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T13:06:27.137Z: Stopping worker pool...
    Mar 15, 2022 1:08:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T13:08:55.642Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 15, 2022 1:08:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T13:08:55.901Z: Worker pool stopped.
    Mar 15, 2022 1:09:03 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-15_06_04_25-9326979274805547998 finished with status DONE.
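
    (Editor's note: this third variant passes because both the column projection and the
    filter were pushed into the BigQuery Storage Read API; see the BeamPushDownIOSourceRel
    plan and the "Pushing down the following filter" line above. For reference, a hedged
    sketch of the same read expressed directly against BigQueryIO instead of Beam SQL. The
    table reference is a placeholder; the selected fields and the row restriction are the
    ones logged above.)

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply(
            BigQueryIO.readTableRows()
                .from("some-project:some_dataset.hacker_news") // placeholder table id
                .withMethod(Method.DIRECT_READ)
                // Column projection: only these fields cross the wire.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Filter pushdown: evaluated server-side by the Storage Read API.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }

    (The log's "Pushing down the following filter" line corresponds to the
    withRowRestriction string here.)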

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6bf1d689-7a97-485a-880c-4e63f13272e4 and timestamp: 2022-03-15T13:09:03.542000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.704

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 15, 2022 1:09:03 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
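
    (Editor's note: this warning is why no metrics reached InfluxDB despite the results
    table above; the run was launched without the publisher's measurement/database
    settings. As a loudly hedged sketch only, the fix would be adding InfluxDB entries to
    the -DbeamTestPipelineOptions array shown at the top of this log, alongside the
    metricsBigQueryDataset/metricsBigQueryTable options already present. The option names
    below are assumptions modeled on Beam's test utilities, not confirmed by this output.)

    "--influxHost=http://localhost:8086"            (assumed option name and value)
    "--influxDatabase=beam_test_metrics"            (assumed option name and value)
    "--influxMeasurement=sql_bqio_read_java_batch"  (assumed; mirrors metricsBigQueryTable)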

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 5 mins 3.183 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 1s
165 actionable tasks: 107 executed, 56 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/aptnk52myuzve

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3140

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3140/display/redirect>

Changes:


------------------------------------------
[...truncated 352.74 KB...]
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 15, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 15, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 15, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 15, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 15, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 15, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 15, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1651372403]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 15, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 15, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 15, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 15, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 15, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 15, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 15, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 15, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 15, 2022 6:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 15, 2022 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 15, 2022 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 15, 2022 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3273605755870617527.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-e9o71yNPgjEEXn1tqes5BTZ_os_vMjMBLGoBhWC5VOA.jar
    Mar 15, 2022 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 371 files cached, 1 file newly uploaded in 1 second
    Mar 15, 2022 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 15, 2022 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145671 bytes, hash d0b33f627c9a107f05094d0577386ea3bf481811ae292ff67ad44345b7ad8e18> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-0LM_YnyaEH8FCU0Fdzhuo79IGBGuKS_2etRDRbetjhg.pb
    Mar 15, 2022 6:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 15, 2022 6:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 15, 2022 6:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 15, 2022 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 15, 2022 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-14_23_45_07-14934304043033425971?project=apache-beam-testing
    Mar 15, 2022 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-14_23_45_07-14934304043033425971
    Mar 15, 2022 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-14_23_45_07-14934304043033425971
    Mar 15, 2022 6:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-15T06:45:08.831Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 15, 2022 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T06:45:20.286Z: Worker configuration: e2-standard-2 in us-central1-f.
    Mar 15, 2022 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T06:45:20.933Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 15, 2022 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T06:45:21.003Z: Expanding GroupByKey operations into optimizable parts.
    Mar 15, 2022 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T06:45:21.037Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 15, 2022 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T06:45:21.155Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 15, 2022 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T06:45:21.189Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 15, 2022 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T06:45:21.233Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 15, 2022 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T06:45:21.961Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 15, 2022 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T06:45:22.081Z: Starting 5 workers in us-central1-f...
    Mar 15, 2022 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T06:45:42.871Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 15, 2022 6:46:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T06:46:07.037Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 15, 2022 6:46:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T06:46:34.503Z: Workers have started successfully.
    Mar 15, 2022 6:46:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T06:46:34.566Z: Workers have started successfully.
    Mar 15, 2022 6:47:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-15T06:47:09.846Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHhTQk5Xd0g4Wkh1eBoCamQaAmly/streams/CAYaAmpkGgJpciCT1rWRBCgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHhTQk5Xd0g4Wkh1eBoCamQaAmly/streams/CAYaAmpkGgJpciCT1rWRBCgC': offset 79224 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHhTQk5Xd0g4Wkh1eBoCamQaAmly/streams/CAYaAmpkGgJpciCT1rWRBCgC': offset 79224 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
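
    (Editor's note: the SEVERE entry above is a transient Storage Read API error. The
    worker asked the stream to resume at offset 79224 before the server had materialized
    rows up to that point, hence FAILED_PRECONDITION, and Beam's reader surfaces it as an
    IOException. At the API level the failing call is a ReadRows request with an explicit
    offset; a minimal sketch using the BigQuery Storage v1 client follows. The stream name
    and offset are placeholders taken from the log.)

    import com.google.api.gax.rpc.ServerStream;
    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    public class ResumeStreamSketch {
      public static void main(String[] args) throws Exception {
        // Placeholder stream name; in the log it comes from an existing read session.
        String stream = "projects/PROJECT/locations/us/sessions/SESSION/streams/STREAM";
        long offset = 79224L; // the row offset the worker tried to resume at

        try (BigQueryReadClient client = BigQueryReadClient.create()) {
          ReadRowsRequest request =
              ReadRowsRequest.newBuilder().setReadStream(stream).setOffset(offset).build();
          // FAILED_PRECONDITION ("offset ... has not been allocated yet") is raised
          // while iterating if the server has not yet produced rows up to the
          // requested offset.
          ServerStream<ReadRowsResponse> responses = client.readRowsCallable().call(request);
          long rows = 0;
          for (ReadRowsResponse response : responses) {
            rows += response.getRowCount();
          }
          System.out.println("Read " + rows + " rows from offset " + offset);
        }
      }
    }

    (In this run the error was transient: Dataflow retried the work and the job still
    finished with status DONE below.)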

    Mar 15, 2022 6:47:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T06:47:13.070Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 15, 2022 6:47:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T06:47:13.360Z: Cleaning up.
    Mar 15, 2022 6:47:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T06:47:13.569Z: Stopping worker pool...
    Mar 15, 2022 6:49:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T06:49:43.919Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 15, 2022 6:49:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T06:49:44.088Z: Worker pool stopped.
    Mar 15, 2022 6:49:50 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-14_23_45_07-14934304043033425971 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): cc33c318-221a-4ff3-b246-31bf51a89684 and timestamp: 2022-03-15T06:49:50.497000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.719

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 15, 2022 6:49:50 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 5 mins 3.023 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 29s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/ddhwlhx5horsu

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3139

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3139/display/redirect?page=changes>

Changes:

[benjamin.gonzalez] [BEAM-12572] Add DistinctExample test

[benjamin.gonzalez] [BEAM-12572] Add DistinctExample test

[benjamin.gonzalez] [BEAM-12572] Add java examples tests

[benjamin.gonzalez] [BEAM-12572] Change tests collected by runner

[benjamin.gonzalez] [BEAM-12572] Fix PipelineOptions, skip tests for specific runners

[benjamin.gonzalez] [BEAM-12572] Skip failing tests

[benjamin.gonzalez] [BEAM-12572] Fix checkstyle warnings

[benjamin.gonzalez] [BEAM-12572] Skip tfidf tests on direct runner

[benjamin.gonzalez] [BEAM-12572] Fix PipelineOptions

[benjamin.gonzalez] [BEAM-12572] Fix checkstyle warning

[benjamin.gonzalez] [BEAM-12572] Enable test with Pipeline serialization error fixed, link

[benjamin.gonzalez] [BEAM-12572] Sickbay TfIdf test for DirectRunner

[benjamin.gonzalez] [BEAM-12572] Refactor naming for BigQueryClient

[noreply] Add deterministic dict coding via key sorting. (#17037)

[noreply] Merge pull request #17054 from [BEAM-14075] [SnowflakeIO] Change a


------------------------------------------
[...truncated 363.68 KB...]
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-14_17_45_12-13182112254782742541
    Mar 15, 2022 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-15T00:45:15.979Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 15, 2022 12:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T00:45:30.149Z: Worker configuration: e2-standard-2 in us-central1-b.
    Mar 15, 2022 12:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T00:45:30.994Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 15, 2022 12:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T00:45:31.055Z: Expanding GroupByKey operations into optimizable parts.
    Mar 15, 2022 12:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T00:45:31.090Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 15, 2022 12:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T00:45:31.165Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 15, 2022 12:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T00:45:31.224Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 15, 2022 12:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T00:45:31.258Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 15, 2022 12:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T00:45:31.572Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 15, 2022 12:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T00:45:31.641Z: Starting 5 workers in us-central1-b...
    Mar 15, 2022 12:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T00:45:57.115Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 15, 2022 12:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T00:46:24.031Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 15, 2022 12:46:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T00:46:49.635Z: Workers have started successfully.
    Mar 15, 2022 12:46:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T00:46:49.673Z: Workers have started successfully.
    Mar 15, 2022 12:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-15T00:47:18.457Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHFkZTdpX0FkQzNhNhoCamQaAmly/streams/CAQaAmpkGgJpciD51_7rAygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHFkZTdpX0FkQzNhNhoCamQaAmly/streams/CAQaAmpkGgJpciD51_7rAygC': offset 75895 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHFkZTdpX0FkQzNhNhoCamQaAmly/streams/CAQaAmpkGgJpciD51_7rAygC': offset 75895 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 15, 2022 12:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-15T00:47:19.672Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHFkZTdpX0FkQzNhNhoCamQaAmly/streams/CAUaAmpkGgJpciCHsN3KBigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHFkZTdpX0FkQzNhNhoCamQaAmly/streams/CAUaAmpkGgJpciCHsN3KBigC': offset 65213 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHFkZTdpX0FkQzNhNhoCamQaAmly/streams/CAUaAmpkGgJpciCHsN3KBigC': offset 65213 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 15, 2022 12:47:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-15T00:47:20.514Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHFkZTdpX0FkQzNhNhoCamQaAmly/streams/CAkaAmpkGgJpciC3grnzAigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHFkZTdpX0FkQzNhNhoCamQaAmly/streams/CAkaAmpkGgJpciC3grnzAigC': offset 73977 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHFkZTdpX0FkQzNhNhoCamQaAmly/streams/CAkaAmpkGgJpciC3grnzAigC': offset 73977 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
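
The repeated FAILED_PRECONDITION errors above ("offset N has not been allocated
yet") come from the BigQuery Storage Read API when a reader requests an offset
the stream has not produced yet. Here they are transient: the read operation
finishes and the job ends with status DONE below, which indicates the failing
work items were retried. A minimal, hypothetical sketch of the resume-at-offset
pattern the error implies (assuming an open BigQueryReadClient and a stream
name; this is not Beam's actual worker code):

    import com.google.api.gax.rpc.FailedPreconditionException;
    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    static long readStream(BigQueryReadClient client, String streamName)
        throws InterruptedException {
      long offset = 0;
      while (true) {
        ReadRowsRequest request = ReadRowsRequest.newBuilder()
            .setReadStream(streamName)
            .setOffset(offset)
            .build();
        try {
          for (ReadRowsResponse response : client.readRowsCallable().call(request)) {
            offset += response.getRowCount(); // remember progress for resumption
          }
          return offset; // stream fully consumed
        } catch (FailedPreconditionException e) {
          Thread.sleep(1_000L); // back off, then retry from the last good offset
        }
      }
    }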

    Mar 15, 2022 12:47:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T00:47:24.558Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 15, 2022 12:47:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T00:47:24.709Z: Cleaning up.
    Mar 15, 2022 12:47:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T00:47:24.790Z: Stopping worker pool...
    Mar 15, 2022 12:50:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T00:50:00.136Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 15, 2022 12:50:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-15T00:50:00.221Z: Worker pool stopped.
    Mar 15, 2022 12:51:12 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-14_17_45_12-13182112254782742541 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 738a9519-1bfe-4df4-836d-8e75cd9f1985 and timestamp: 2022-03-15T00:51:12.844000000Z:
                     Metric:                    Value:
                   read_time                    11.664
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 15, 2022 12:51:12 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
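
The warning above means the InfluxDB measurement and database settings were not
supplied, so the metrics shown here are printed to the console but not
published. They would normally be passed as additional pipeline options in the
test command, for example (option names and values assumed, not taken from this
job's configuration):

    "--influxMeasurement=sql_bqio_read_java_batch","--influxDatabase=beam_performance"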

Gradle Test Executor 44 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.001 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.002 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 6 mins 20.444 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 51s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/sknk7dmwb5fl4

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3138

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3138/display/redirect?page=changes>

Changes:

[mmack] [adhoc] Update owners for aws modules

[noreply] [BEAM-14051] Add a warning to gradle.properties that many Beam modules

[noreply] Merge pull request #17000 from [BEAM-13881] [Playground] Increase test


------------------------------------------
[...truncated 349.08 KB...]
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 14, 2022 6:51:06 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 14, 2022 6:51:08 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 14, 2022 6:51:08 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 371 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 14, 2022 6:51:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 14, 2022 6:51:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 14, 2022 6:51:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 14, 2022 6:51:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 14, 2022 6:51:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 14, 2022 6:51:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 14, 2022 6:51:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1293369737]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 14, 2022 6:51:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 14, 2022 6:51:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 14, 2022 6:51:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 14, 2022 6:51:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 14, 2022 6:51:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 14, 2022 6:51:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 14, 2022 6:51:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@15076442]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 14, 2022 6:51:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 14, 2022 6:51:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 14, 2022 6:51:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 14, 2022 6:51:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 14, 2022 6:51:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 14, 2022 6:51:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 14, 2022 6:51:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 14, 2022 6:51:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
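
"Push-down" here means the projected fields and the filter are handed to the
BigQuery Storage Read API instead of being evaluated inside the pipeline. In
Storage Read API terms that corresponds to selected fields plus a row
restriction, roughly like the following sketch (illustrative only, not the
request BigQueryTable actually builds):

    import com.google.cloud.bigquery.storage.v1.ReadSession.TableReadOptions;

    static TableReadOptions pushedDownOptions() {
      return TableReadOptions.newBuilder()
          .addSelectedFields("by")
          .addSelectedFields("type")
          .addSelectedFields("title")
          .addSelectedFields("score")
          .setRowRestriction("(type = 'story' OR type = 'job') AND score > 2")
          .build();
    }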
    Mar 14, 2022 6:51:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 14, 2022 6:51:20 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 14, 2022 6:51:20 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 14, 2022 6:51:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4155918696637871207.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-88ULBAkGf9aP9TYGLUcZwI2pIvYdoUfden3d0ZBum8I.jar
    Mar 14, 2022 6:51:21 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 371 files cached, 1 files newly uploaded in 0 seconds
    Mar 14, 2022 6:51:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 14, 2022 6:51:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145673 bytes, hash 5ce9cf9bbd876c2df7c09f1e70b9a1ac008470a58d17c35c7c050d823b329b05> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-XOnPm72HbC33wJ8ecLmhrACEcKWNF8NcfAUNgjsymwU.pb
    Mar 14, 2022 6:51:26 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 14, 2022 6:51:26 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 14, 2022 6:51:26 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 14, 2022 6:51:26 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 14, 2022 6:51:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-14_11_51_26-12352748356936362727?project=apache-beam-testing
    Mar 14, 2022 6:51:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-14_11_51_26-12352748356936362727
    Mar 14, 2022 6:51:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-14_11_51_26-12352748356936362727
    Mar 14, 2022 6:51:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-14T18:51:27.941Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 14, 2022 6:51:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T18:51:35.034Z: Worker configuration: e2-standard-2 in us-central1-b.
    Mar 14, 2022 6:51:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T18:51:35.722Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 14, 2022 6:51:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T18:51:35.792Z: Expanding GroupByKey operations into optimizable parts.
    Mar 14, 2022 6:51:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T18:51:35.822Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 14, 2022 6:51:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T18:51:35.933Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 14, 2022 6:51:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T18:51:35.981Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 14, 2022 6:51:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T18:51:36.022Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 14, 2022 6:51:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T18:51:36.513Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 14, 2022 6:51:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T18:51:36.667Z: Starting 5 workers in us-central1-b...
    Mar 14, 2022 6:51:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T18:51:48.038Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 14, 2022 6:52:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T18:52:26.041Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Mar 14, 2022 6:52:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T18:52:26.072Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Mar 14, 2022 6:52:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T18:52:36.413Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 14, 2022 6:52:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T18:52:50.852Z: Workers have started successfully.
    Mar 14, 2022 6:52:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T18:52:50.881Z: Workers have started successfully.
    Mar 14, 2022 6:53:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T18:53:23.888Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 14, 2022 6:53:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T18:53:24.117Z: Cleaning up.
    Mar 14, 2022 6:53:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T18:53:24.264Z: Stopping worker pool...
    Mar 14, 2022 6:55:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T18:55:51.535Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 14, 2022 6:55:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T18:55:51.573Z: Worker pool stopped.
    Mar 14, 2022 6:55:59 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-14_11_51_26-12352748356936362727 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): e60e6e8e-42c8-4e72-8998-60c26d2fb0da and timestamp: 2022-03-14T18:55:59.323000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.306

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 14, 2022 6:55:59 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 56.752 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 3s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/g2kowcggjzjow

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3137

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3137/display/redirect>

Changes:


------------------------------------------
[...truncated 361.78 KB...]
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-14_05_45_09-14342096232824781418
    Mar 14, 2022 12:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-14T12:45:10.077Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 14, 2022 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T12:45:19.168Z: Worker configuration: e2-standard-2 in us-central1-b.
    Mar 14, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T12:45:19.992Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 14, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T12:45:20.033Z: Expanding GroupByKey operations into optimizable parts.
    Mar 14, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T12:45:20.059Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 14, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T12:45:20.143Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 14, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T12:45:20.179Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 14, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T12:45:20.227Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 14, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T12:45:20.555Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 14, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T12:45:20.640Z: Starting 5 workers in us-central1-b...
    Mar 14, 2022 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T12:45:33.267Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 14, 2022 12:46:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T12:46:05.963Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 14, 2022 12:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T12:46:36.107Z: Workers have started successfully.
    Mar 14, 2022 12:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T12:46:36.171Z: Workers have started successfully.
    Mar 14, 2022 12:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-14T12:47:04.566Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEZWQ2I3ZEw0Y2U2NxoCamQaAmly/streams/CAgaAmpkGgJpciDvvtO4BSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEZWQ2I3ZEw0Y2U2NxoCamQaAmly/streams/CAgaAmpkGgJpciDvvtO4BSgC': offset 81637 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEZWQ2I3ZEw0Y2U2NxoCamQaAmly/streams/CAgaAmpkGgJpciDvvtO4BSgC': offset 81637 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 14, 2022 12:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-14T12:47:04.981Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEZWQ2I3ZEw0Y2U2NxoCamQaAmly/streams/CAcaAmpkGgJpciDH1djSAygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEZWQ2I3ZEw0Y2U2NxoCamQaAmly/streams/CAcaAmpkGgJpciDH1djSAygC': offset 119321 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEZWQ2I3ZEw0Y2U2NxoCamQaAmly/streams/CAcaAmpkGgJpciDH1djSAygC': offset 119321 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 14, 2022 12:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-14T12:47:05.572Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEZWQ2I3ZEw0Y2U2NxoCamQaAmly/streams/CAYaAmpkGgJpciD6sdHnBSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEZWQ2I3ZEw0Y2U2NxoCamQaAmly/streams/CAYaAmpkGgJpciD6sdHnBSgC': offset 73420 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEZWQ2I3ZEw0Y2U2NxoCamQaAmly/streams/CAYaAmpkGgJpciD6sdHnBSgC': offset 73420 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
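
A note on the failure above: the FAILED_PRECONDITION ("offset N has not been
allocated yet") comes from the BigQuery Storage Read API and means a reader
asked the server to resume a read stream at an offset the backend had not yet
committed for that stream. The failing work item gets retried, and the job
itself still reaches status DONE further down in this log. Beam's
BigQueryStorageStreamReader (visible in the trace) tracks the last fully
consumed row offset and reopens the stream from there. Below is a minimal
sketch of that resume pattern against the public client, assuming a stream
name obtained from CreateReadSession; Beam's real reader adds backoff, a
retry budget, and dynamic splitting:

    import com.google.api.gax.rpc.FailedPreconditionException;
    import com.google.api.gax.rpc.ServerStream;
    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    public class ResumableStreamRead {
      // Drain one read stream, resuming from the last consumed offset on failure.
      static void drainStream(String streamName) throws Exception {
        long offset = 0;
        try (BigQueryReadClient client = BigQueryReadClient.create()) {
          while (true) {
            ReadRowsRequest request = ReadRowsRequest.newBuilder()
                .setReadStream(streamName).setOffset(offset).build();
            try {
              ServerStream<ReadRowsResponse> rows =
                  client.readRowsCallable().call(request);
              for (ReadRowsResponse response : rows) {
                // ... decode response.getAvroRows() or getArrowRecordBatch() ...
                offset += response.getRowCount(); // advance past consumed rows only
              }
              return; // stream drained
            } catch (FailedPreconditionException e) {
              // "offset N has not been allocated yet" lands here; loop and
              // reopen from the last committed offset (with backoff in practice).
            }
          }
        }
      }
    }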

    Mar 14, 2022 12:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T12:47:07.937Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 14, 2022 12:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T12:47:08.079Z: Cleaning up.
    Mar 14, 2022 12:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T12:47:08.181Z: Stopping worker pool...
    Mar 14, 2022 12:49:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T12:49:40.542Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 14, 2022 12:49:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T12:49:40.593Z: Worker pool stopped.
    Mar 14, 2022 12:49:47 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-14_05_45_09-14342096232824781418 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4b56c5d3-bd46-4e1b-9a51-9eef9218a40e and timestamp: 2022-03-14T12:49:47.949000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.958
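
The fields_read and read_time values above are custom Beam metrics published
by the monitoring transforms in the step names (ParDo(RowMonitor) and
ParDo(TimeMonitor)); the harness then derives read_time from the spread
between the earliest and latest recorded timestamps. A minimal sketch of that
per-element monitoring pattern follows; the "perf" namespace and the exact
accounting are illustrative, not the test's literal implementation:

    import org.apache.beam.sdk.metrics.Counter;
    import org.apache.beam.sdk.metrics.Distribution;
    import org.apache.beam.sdk.metrics.Metrics;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.values.Row;

    // Counts row fields and records wall-clock timestamps per element.
    class RowMonitorFn extends DoFn<Row, Row> {
      private final Counter fieldsRead = Metrics.counter("perf", "fields_read");
      private final Distribution readTime = Metrics.distribution("perf", "read_time");

      @ProcessElement
      public void processElement(ProcessContext c) {
        fieldsRead.inc(c.element().getFieldCount());
        readTime.update(System.currentTimeMillis()); // min/max bound the read window
        c.output(c.element());
      }
    }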

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 14, 2022 12:49:48 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 5 mins 2.061 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 28s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/e2l3xbxptiljo

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3136

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3136/display/redirect>

Changes:


------------------------------------------
[...truncated 356.49 KB...]
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 14, 2022 6:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 14, 2022 6:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 14, 2022 6:44:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 14, 2022 6:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 14, 2022 6:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 14, 2022 6:44:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 14, 2022 6:44:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 14, 2022 6:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
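
The two plans above show the push-down working as intended: the generic
BeamIOSourceRel plus LogicalFilter/LogicalProject collapse into a single
BeamPushDownIOSourceRel with usedFields=[by, type, title, score] and a fully
supported BigQueryFilter, so only four columns of pre-filtered rows leave
BigQuery. Outside Beam SQL, a roughly equivalent direct read can be written
against BigQueryIO itself; the table spec below is a placeholder and the
options wiring is simplified:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadPushDown {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        p.apply(
            BigQueryIO.readTableRows()
                .from("apache-beam-testing:beam.HACKER_NEWS") // placeholder table spec
                .withMethod(TypedRead.Method.DIRECT_READ)     // Storage Read API
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }
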
    Mar 14, 2022 6:44:43 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 14, 2022 6:44:47 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 14, 2022 6:44:47 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 14, 2022 6:44:47 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2917121847895919072.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-RPPF_tjdUO5AEEFJ_K93fJmXxGTFiIJvxPQyigfmym0.jar
    Mar 14, 2022 6:44:48 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 371 files cached, 1 files newly uploaded in 1 seconds
    Mar 14, 2022 6:44:48 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 14, 2022 6:44:48 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145671 bytes, hash 26b054fe30092cae90d86dcad40a9c994b5e2db4bc406d6ac81e554dcb5f21c1> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-JrBU_jAJLK6Q2G3K1AqcmUteLbS8QG1qyB5VTctfIcE.pb
    Mar 14, 2022 6:44:51 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 14, 2022 6:44:52 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 14, 2022 6:44:52 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 14, 2022 6:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 14, 2022 6:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-13_23_44_52-17134992755797174043?project=apache-beam-testing
    Mar 14, 2022 6:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-13_23_44_52-17134992755797174043
    Mar 14, 2022 6:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-13_23_44_52-17134992755797174043
    Mar 14, 2022 6:44:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-14T06:44:53.642Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
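
That warning is expected for this test: the pipeline pins the worker pool by
disabling autoscaling, which makes maxNumWorkers irrelevant. In Dataflow
option terms the setup amounts to something like the following sketch (not
the test's literal wiring):

    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.runners.dataflow.options.DataflowPipelineWorkerPoolOptions.AutoscalingAlgorithmType;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class FixedPoolOptions {
      static DataflowPipelineOptions fixedPool() {
        DataflowPipelineOptions options =
            PipelineOptionsFactory.create().as(DataflowPipelineOptions.class);
        options.setNumWorkers(5);     // fixed pool size
        options.setMaxNumWorkers(5);  // ignored once autoscaling is disabled
        options.setAutoscalingAlgorithm(AutoscalingAlgorithmType.NONE);
        return options;
      }
    }
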
    Mar 14, 2022 6:45:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T06:45:01.306Z: Worker configuration: e2-standard-2 in us-central1-b.
    Mar 14, 2022 6:45:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T06:45:01.938Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 14, 2022 6:45:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T06:45:01.967Z: Expanding GroupByKey operations into optimizable parts.
    Mar 14, 2022 6:45:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T06:45:01.997Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 14, 2022 6:45:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T06:45:02.068Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 14, 2022 6:45:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T06:45:02.098Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 14, 2022 6:45:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T06:45:02.121Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 14, 2022 6:45:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T06:45:02.480Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 14, 2022 6:45:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T06:45:02.545Z: Starting 5 workers in us-central1-b...
    Mar 14, 2022 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T06:45:15.441Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 14, 2022 6:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T06:45:43.616Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 14, 2022 6:46:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T06:46:08.400Z: Workers have started successfully.
    Mar 14, 2022 6:46:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T06:46:08.435Z: Workers have started successfully.
    Mar 14, 2022 6:46:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-14T06:46:45.745Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHQ4dEtKQzZWck01OBoCamQaAmly/streams/CAgaAmpkGgJpciCiu4j5BSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHQ4dEtKQzZWck01OBoCamQaAmly/streams/CAgaAmpkGgJpciCiu4j5BSgC': offset 95988 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHQ4dEtKQzZWck01OBoCamQaAmly/streams/CAgaAmpkGgJpciCiu4j5BSgC': offset 95988 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 14, 2022 6:46:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-14T06:46:45.747Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHQ4dEtKQzZWck01OBoCamQaAmly/streams/GgJqZBoCaXIg9JeIpAYoAg"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHQ4dEtKQzZWck01OBoCamQaAmly/streams/GgJqZBoCaXIg9JeIpAYoAg': offset 90339 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHQ4dEtKQzZWck01OBoCamQaAmly/streams/GgJqZBoCaXIg9JeIpAYoAg': offset 90339 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 14, 2022 6:46:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T06:46:47.721Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 14, 2022 6:46:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T06:46:47.879Z: Cleaning up.
    Mar 14, 2022 6:46:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T06:46:47.957Z: Stopping worker pool...
    Mar 14, 2022 6:49:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T06:49:16.320Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 14, 2022 6:49:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T06:49:16.402Z: Worker pool stopped.
    Mar 14, 2022 6:49:23 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-13_23_44_52-17134992755797174043 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 0b8f6e7c-4328-4e12-a1c4-8bbed149f8d3 and timestamp: 2022-03-14T06:49:23.440000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.933

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 14, 2022 6:49:23 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 15 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.001 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.003 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 50.521 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 2s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/uhgksqd5rnqze

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3135

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3135/display/redirect>

Changes:


------------------------------------------
[...truncated 397.78 KB...]
    Mar 14, 2022 12:45:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T00:45:08.164Z: Worker configuration: e2-standard-2 in us-central1-b.
    Mar 14, 2022 12:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T00:45:08.798Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 14, 2022 12:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T00:45:08.827Z: Expanding GroupByKey operations into optimizable parts.
    Mar 14, 2022 12:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T00:45:08.863Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 14, 2022 12:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T00:45:08.936Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 14, 2022 12:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T00:45:08.973Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 14, 2022 12:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T00:45:09.006Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 14, 2022 12:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T00:45:09.354Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 14, 2022 12:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T00:45:09.464Z: Starting 5 workers in us-central1-b...
    Mar 14, 2022 12:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T00:45:12.482Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 14, 2022 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T00:45:48.422Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Mar 14, 2022 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T00:45:48.454Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Mar 14, 2022 12:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T00:45:58.710Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 14, 2022 12:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T00:46:22.997Z: Workers have started successfully.
    Mar 14, 2022 12:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T00:46:23.031Z: Workers have started successfully.
    Mar 14, 2022 12:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-14T00:46:51.870Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHczeHhVMTRWa1NTZxoCamQaAmly/streams/CAEaAmpkGgJpciCcmIzpBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHczeHhVMTRWa1NTZxoCamQaAmly/streams/CAEaAmpkGgJpciCcmIzpBygC': offset 64811 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHczeHhVMTRWa1NTZxoCamQaAmly/streams/CAEaAmpkGgJpciCcmIzpBygC': offset 64811 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 14, 2022 12:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-14T00:46:52.048Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHczeHhVMTRWa1NTZxoCamQaAmly/streams/CAgaAmpkGgJpciDmye6_AigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHczeHhVMTRWa1NTZxoCamQaAmly/streams/CAgaAmpkGgJpciDmye6_AigC': offset 92878 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHczeHhVMTRWa1NTZxoCamQaAmly/streams/CAgaAmpkGgJpciDmye6_AigC': offset 92878 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 14, 2022 12:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-14T00:46:52.878Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHczeHhVMTRWa1NTZxoCamQaAmly/streams/CAMaAmpkGgJpciDMyeXMAigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHczeHhVMTRWa1NTZxoCamQaAmly/streams/CAMaAmpkGgJpciDMyeXMAigC': offset 66365 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHczeHhVMTRWa1NTZxoCamQaAmly/streams/CAMaAmpkGgJpciDMyeXMAigC': offset 66365 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 14, 2022 12:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T00:46:56.079Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 14, 2022 12:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T00:46:56.282Z: Cleaning up.
    Mar 14, 2022 12:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T00:46:56.371Z: Stopping worker pool...
    Mar 14, 2022 12:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T00:49:25.361Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 14, 2022 12:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-14T00:49:25.438Z: Worker pool stopped.
    Mar 14, 2022 12:49:31 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-13_17_44_55-14034567875861144256 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): de998dc0-ee0c-4df0-b91e-6aa2f66267e9 and timestamp: 2022-03-14T00:49:31.190000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.741

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 14, 2022 12:49:31 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 4 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.012 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.014 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 56.808 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 10s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/rx5z2zk7urka4

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3134

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3134/display/redirect>

Changes:


------------------------------------------
[...truncated 369.59 KB...]
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDhBWENHVFNhek5wehoCamQaAmly/streams/GgJqZBoCaXIg-brIaigC': offset 90389 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
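
These FAILED_PRECONDITION errors come from the BigQuery Storage Read API: a ReadRows call names a stream and a row offset, and requesting an offset the server has not yet allocated to that stream is rejected. As a rough illustration only (not the Beam-internal retry logic), here is a minimal sketch of reading a stream from an explicit offset with the v1 client, assuming a stream name obtained elsewhere from a CreateReadSession response:

    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    public class ReadFromOffset {
      public static void main(String[] args) throws Exception {
        // Hypothetical stream name; real ones come from a CreateReadSession response.
        String stream =
            "projects/my-project/locations/us/sessions/SESSION_ID/streams/STREAM_ID";
        long offset = 0; // rows already consumed; must not run past what the server has allocated
        try (BigQueryReadClient client = BigQueryReadClient.create()) {
          ReadRowsRequest request =
              ReadRowsRequest.newBuilder().setReadStream(stream).setOffset(offset).build();
          for (ReadRowsResponse response : client.readRowsCallable().call(request)) {
            offset += response.getRowCount(); // advance past rows we have processed
          }
        }
      }
    }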

    Mar 13, 2022 6:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-13T18:47:07.160Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDhBWENHVFNhek5wehoCamQaAmly/streams/CAkaAmpkGgJpciDrrJ-lBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDhBWENHVFNhek5wehoCamQaAmly/streams/CAkaAmpkGgJpciDrrJ-lBygC': offset 106421 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDhBWENHVFNhek5wehoCamQaAmly/streams/CAkaAmpkGgJpciDrrJ-lBygC': offset 106421 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 13, 2022 6:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-13T18:47:07.455Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDhBWENHVFNhek5wehoCamQaAmly/streams/CAEaAmpkGgJpciDmgoSVASgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDhBWENHVFNhek5wehoCamQaAmly/streams/CAEaAmpkGgJpciDmgoSVASgC': offset 82760 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDhBWENHVFNhek5wehoCamQaAmly/streams/CAEaAmpkGgJpciDmgoSVASgC': offset 82760 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 13, 2022 6:47:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-13T18:47:08.459Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDhBWENHVFNhek5wehoCamQaAmly/streams/CAMaAmpkGgJpciDw8Z6zBSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDhBWENHVFNhek5wehoCamQaAmly/streams/CAMaAmpkGgJpciDw8Z6zBSgC': offset 82815 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDhBWENHVFNhek5wehoCamQaAmly/streams/CAMaAmpkGgJpciDw8Z6zBSgC': offset 82815 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 13, 2022 6:47:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T18:47:12.451Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 13, 2022 6:47:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T18:47:12.596Z: Cleaning up.
    Mar 13, 2022 6:47:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T18:47:12.712Z: Stopping worker pool...
    Mar 13, 2022 6:49:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T18:49:33.758Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 13, 2022 6:49:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T18:49:33.825Z: Worker pool stopped.
    Mar 13, 2022 6:49:38 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-13_11_45_11-16657074340562287892 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 26a3734b-33fd-4d1a-991e-b2bfd46a9c50 and timestamp: 2022-03-13T18:49:39.043000000Z:
                     Metric:                    Value:
                   read_time                    12.226
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 13, 2022 6:49:39 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 48.25 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 16s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/khv7y2nwq3cao

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3133

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3133/display/redirect>

Changes:


------------------------------------------
[...truncated 352.34 KB...]
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 13, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 13, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 13, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 13, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 13, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 13, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 13, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
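
For reference, the query the planner is compiling above can also be expressed through Beam SQL's public API. A minimal sketch, assuming a schema-aware PCollection<Row> of Hacker News records (the IT itself drives the planner via BeamSqlRelUtils rather than SqlTransform):

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class HackerNewsQuery {
      // A single input PCollection is visible to the query under the name PCOLLECTION.
      static PCollection<Row> storiesAndJobs(PCollection<Row> hackerNews) {
        return hackerNews.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score FROM PCOLLECTION "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
      }
    }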


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@15076442]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)
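
Both coder failures above point at the remedy named in the message itself: a PCollection<Row> needs an explicit schema so Beam can derive a RowCoder. A minimal sketch of that fix, with a hypothetical schema matching the four projected columns:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.schemas.Schema.FieldType;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFix {
      static PCollection<Row> attachSchema(PCollection<Row> rows) {
        Schema schema =
            Schema.builder()
                .addNullableField("author", FieldType.STRING)
                .addNullableField("type", FieldType.STRING)
                .addNullableField("title", FieldType.STRING)
                .addNullableField("score", FieldType.INT64)
                .build();
        // setRowSchema registers a RowCoder for this collection, so coder inference succeeds.
        return rows.setRowSchema(schema);
      }
    }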

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 13, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 13, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 13, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 13, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 13, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 13, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 13, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 13, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
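
That pushed-down filter, together with the usedFields list in the plan above, corresponds to what BigQueryIO exposes as column selection and a row restriction on DIRECT_READ. A minimal hand-written equivalent, with a hypothetical table reference standing in for the IT's HACKER_NEWS table:

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;

    public class DirectReadPushDown {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        PCollection<TableRow> rows =
            p.apply(
                BigQueryIO.readTableRows()
                    .from("my-project:my_dataset.HACKER_NEWS") // hypothetical table reference
                    .withMethod(Method.DIRECT_READ)
                    // Only these columns are requested from the Storage Read API.
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    // This predicate is evaluated server-side before rows are returned.
                    .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }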
    Mar 13, 2022 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 13, 2022 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 13, 2022 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 13, 2022 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4853230122424036726.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-8MUEoHPHTqdfW64tkKWvLE75MUivRPU5akJZ-U6vk3c.jar
    Mar 13, 2022 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 371 files cached, 1 files newly uploaded in 0 seconds
    Mar 13, 2022 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 13, 2022 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145671 bytes, hash b8c4946dbea87e638341a12c8b1bbad5222a2017f6bd8d8384ada0bec9eaee0e> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-uMSUbb6ofmODQaEsixu61SIqIBf2vY2DhK2gvsnq7g4.pb
    Mar 13, 2022 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 13, 2022 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 13, 2022 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 13, 2022 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 13, 2022 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-13_05_45_06-14467792192728459297?project=apache-beam-testing
    Mar 13, 2022 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-13_05_45_06-14467792192728459297
    Mar 13, 2022 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-13_05_45_06-14467792192728459297
    Mar 13, 2022 12:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-13T12:45:08.350Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 13, 2022 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T12:45:19.636Z: Worker configuration: e2-standard-2 in us-central1-b.
    Mar 13, 2022 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T12:45:20.335Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 13, 2022 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T12:45:20.372Z: Expanding GroupByKey operations into optimizable parts.
    Mar 13, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T12:45:20.405Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 13, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T12:45:20.491Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 13, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T12:45:20.522Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 13, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T12:45:20.550Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 13, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T12:45:20.881Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 13, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T12:45:20.994Z: Starting 5 workers in us-central1-b...
    Mar 13, 2022 12:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T12:45:38.821Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 13, 2022 12:46:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T12:46:05.334Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 13, 2022 12:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T12:46:34.149Z: Workers have started successfully.
    Mar 13, 2022 12:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T12:46:34.213Z: Workers have started successfully.
    Mar 13, 2022 12:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-13T12:47:06.728Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHp4S1V5R1ZFSVBHNxoCamQaAmly/streams/CAYaAmpkGgJpciCwpfbHASgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHp4S1V5R1ZFSVBHNxoCamQaAmly/streams/CAYaAmpkGgJpciCwpfbHASgC': offset 85315 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHp4S1V5R1ZFSVBHNxoCamQaAmly/streams/CAYaAmpkGgJpciCwpfbHASgC': offset 85315 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 13, 2022 12:47:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T12:47:08.895Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 13, 2022 12:47:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T12:47:09.098Z: Cleaning up.
    Mar 13, 2022 12:47:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T12:47:09.166Z: Stopping worker pool...
    Mar 13, 2022 12:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T12:49:30.911Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 13, 2022 12:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T12:49:30.963Z: Worker pool stopped.
    Mar 13, 2022 12:49:36 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-13_05_45_06-14467792192728459297 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 1c830077-9184-4236-ae75-36341ef375eb and timestamp: 2022-03-13T12:49:36.408000000Z:
                     Metric:                    Value:
                   read_time                     8.939
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 13, 2022 12:49:36 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 49.335 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 15s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/t4i3kc3zpaglq

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3132

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3132/display/redirect>

Changes:


------------------------------------------
[...truncated 357.01 KB...]
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 13, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 13, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 13, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 13, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 13, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 13, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 13, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 13, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 13, 2022 6:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 13, 2022 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 13, 2022 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 13, 2022 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6774907868293533535.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-m9qRmcf3c4VSfQ2eK-QHIlRpqhdG0_DkaERn5tSoim8.jar
    Mar 13, 2022 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 371 files cached, 1 files newly uploaded in 0 seconds
    Mar 13, 2022 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 13, 2022 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145674 bytes, hash 55acc9917e89540208ad9c72aee38cf24ce3a150647fe57054e0ed77edde3220> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-VazJkX6JVAIIrZxyruOM8kzjoVBkf-VwVODtd-3eMiA.pb
    Mar 13, 2022 6:45:09 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 13, 2022 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 13, 2022 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 13, 2022 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 13, 2022 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-12_22_45_10-5055698595194007123?project=apache-beam-testing
    Mar 13, 2022 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-12_22_45_10-5055698595194007123
    Mar 13, 2022 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-12_22_45_10-5055698595194007123
    Mar 13, 2022 6:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-13T06:45:11.221Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 13, 2022 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T06:45:19.209Z: Worker configuration: e2-standard-2 in us-central1-c.
    Mar 13, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T06:45:19.909Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 13, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T06:45:19.947Z: Expanding GroupByKey operations into optimizable parts.
    Mar 13, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T06:45:19.979Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 13, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T06:45:20.035Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 13, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T06:45:20.067Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 13, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T06:45:20.098Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 13, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T06:45:20.497Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 13, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T06:45:20.562Z: Starting 5 workers in us-central1-c...
    Mar 13, 2022 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T06:45:27.117Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 13, 2022 6:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T06:45:59.271Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 13, 2022 6:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T06:46:25.504Z: Workers have started successfully.
    Mar 13, 2022 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-13T06:46:55.481Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEVTV0UxeER5WlZoSBoCamQaAmly/streams/CAEaAmpkGgJpciC6l5GMBigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEVTV0UxeER5WlZoSBoCamQaAmly/streams/CAEaAmpkGgJpciC6l5GMBigC': offset 107045 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEVTV0UxeER5WlZoSBoCamQaAmly/streams/CAEaAmpkGgJpciC6l5GMBigC': offset 107045 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 13, 2022 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-13T06:46:55.510Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEVTV0UxeER5WlZoSBoCamQaAmly/streams/CAYaAmpkGgJpciDDlInAASgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEVTV0UxeER5WlZoSBoCamQaAmly/streams/CAYaAmpkGgJpciDDlInAASgC': offset 64456 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEVTV0UxeER5WlZoSBoCamQaAmly/streams/CAYaAmpkGgJpciDDlInAASgC': offset 64456 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 13, 2022 6:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T06:46:58.271Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 13, 2022 6:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T06:46:58.507Z: Cleaning up.
    Mar 13, 2022 6:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T06:46:58.626Z: Stopping worker pool...
    Mar 13, 2022 6:49:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T06:49:15.818Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 13, 2022 6:49:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T06:49:15.856Z: Worker pool stopped.
    Mar 13, 2022 6:49:20 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-12_22_45_10-5055698595194007123 finished with status DONE.
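
The SEVERE entries above come from the BigQuery Storage Read API: FAILED_PRECONDITION with "offset N has not been allocated yet" is returned when a reader requests a stream offset the read session has not yet made available, which is typically a transient server-side condition rather than data loss. Note that the job still finished with status DONE, consistent with Dataflow retrying failed work items. A minimal sketch of a bounded retry around such a read, assuming only the gax FailedPreconditionException type (the helper names below are hypothetical, not Beam or gax APIs):

    import com.google.api.gax.rpc.FailedPreconditionException;

    /** Hypothetical bounded-retry helper; illustrative only, not a Beam API. */
    public final class BoundedRetry {

      @FunctionalInterface
      interface Attempt<T> {
        T run() throws Exception;
      }

      static <T> T withRetries(Attempt<T> attempt, int maxAttempts, long baseBackoffMs)
          throws Exception {
        Exception last = null;
        for (int i = 0; i < maxAttempts; i++) {
          try {
            return attempt.run();
          } catch (FailedPreconditionException e) {
            last = e; // e.g. "offset ... has not been allocated yet"
            Thread.sleep(baseBackoffMs << i); // exponential backoff between attempts
          }
        }
        if (last == null) {
          throw new IllegalArgumentException("maxAttempts must be positive");
        }
        throw last;
      }
    }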

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 1fabb37c-1e45-449b-89f3-36925ddde4eb and timestamp: 2022-03-13T06:49:21.034000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.984

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 13, 2022 6:49:21 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
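
The warning above means the metrics from this run were printed but not persisted: the InfluxDB publisher needs a measurement and database to be configured. Assuming the test utils accept these as pipeline options named --influxDatabase and --influxMeasurement (option names assumed here, not confirmed by this log), the fix would be to pass them alongside the other test pipeline options, e.g. "--influxDatabase=<db>" and "--influxMeasurement=<measurement>".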

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 31.387 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings
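
For this build, that means rerunning the failing task with the flag shown above, e.g.:

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --warning-mode all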

BUILD FAILED in 4m 59s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/3pn6zoxf2evqy

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3131

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3131/display/redirect?page=changes>

Changes:

[noreply] [BEAM-14072] [BEAM-13993] [BEAM-10039] Import beam plugins before


------------------------------------------
[...truncated 357.21 KB...]
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 13, 2022 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 13, 2022 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 13, 2022 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 13, 2022 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 13, 2022 12:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 13, 2022 12:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
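
The BEAMPlan above shows both projection push-down (usedFields=[by, type, title, score]) and predicate push-down (the supported BigQueryFilter), which the table provider then translates into the filter logged on the previous line. Outside of Beam SQL, the same DIRECT_READ push-down can be expressed against BigQueryIO directly; a minimal sketch, assuming an illustrative table reference (the real test table is not shown in this log):

    import java.util.Arrays;

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadPushDown {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        p.apply(
            "Read HACKER_NEWS with push-down",
            BigQueryIO.readTableRows()
                .from("my-project:beam.HACKER_NEWS") // illustrative table reference
                .withMethod(TypedRead.Method.DIRECT_READ)
                // Projection push-down: only these fields are read from storage.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Predicate push-down: evaluated server-side by the Storage API.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }
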
    Mar 13, 2022 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 13, 2022 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 13, 2022 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 13, 2022 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test894602903944300528.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-BOISaHrGJOiHg-82ZHyvwqUEQMo5OHF8DSTOly-Fqqk.jar
    Mar 13, 2022 12:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 371 files cached, 1 files newly uploaded in 0 seconds
    Mar 13, 2022 12:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 13, 2022 12:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145671 bytes, hash b2b0f491a45106f978eef60d902bdb1e7fd61596faa9b2541ab69c3373809607> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-srD0kaRRBvl47vYNkCvbHn_WFZb6qbJUGracM3OAlgc.pb
    Mar 13, 2022 12:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 13, 2022 12:45:14 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 13, 2022 12:45:14 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 13, 2022 12:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 13, 2022 12:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-12_16_45_14-1014706705353705305?project=apache-beam-testing
    Mar 13, 2022 12:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-12_16_45_14-1014706705353705305
    Mar 13, 2022 12:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-12_16_45_14-1014706705353705305
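
The same tool can also be used to inspect the job while it runs (a usage sketch reusing the project and region flags from the cancel command above):

    gcloud dataflow jobs describe 2022-03-12_16_45_14-1014706705353705305 --project=apache-beam-testing --region=us-central1
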
    Mar 13, 2022 12:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-13T00:45:16.039Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 13, 2022 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T00:45:27.590Z: Worker configuration: e2-standard-2 in us-central1-f.
    Mar 13, 2022 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T00:45:28.284Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 13, 2022 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T00:45:28.316Z: Expanding GroupByKey operations into optimizable parts.
    Mar 13, 2022 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T00:45:28.351Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 13, 2022 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T00:45:28.415Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 13, 2022 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T00:45:28.434Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 13, 2022 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T00:45:28.460Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 13, 2022 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T00:45:28.799Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 13, 2022 12:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T00:45:28.875Z: Starting 5 workers in us-central1-f...
    Mar 13, 2022 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T00:45:41.740Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 13, 2022 12:46:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T00:46:04.024Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Mar 13, 2022 12:46:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T00:46:04.054Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Mar 13, 2022 12:46:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T00:46:14.305Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 13, 2022 12:46:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T00:46:39.203Z: Workers have started successfully.
    Mar 13, 2022 12:46:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T00:46:39.234Z: Workers have started successfully.
    Mar 13, 2022 12:47:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-13T00:47:10.645Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEdRU2E2VWFodE9LaBoCamQaAmly/streams/CAQaAmpkGgJpciCLjoq4BigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEdRU2E2VWFodE9LaBoCamQaAmly/streams/CAQaAmpkGgJpciCLjoq4BigC': offset 75064 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEdRU2E2VWFodE9LaBoCamQaAmly/streams/CAQaAmpkGgJpciCLjoq4BigC': offset 75064 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 13, 2022 12:47:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-13T00:47:11.762Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEdRU2E2VWFodE9LaBoCamQaAmly/streams/CAUaAmpkGgJpciDR_O-nBSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEdRU2E2VWFodE9LaBoCamQaAmly/streams/CAUaAmpkGgJpciDR_O-nBSgC': offset 108468 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEdRU2E2VWFodE9LaBoCamQaAmly/streams/CAUaAmpkGgJpciDR_O-nBSgC': offset 108468 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 13, 2022 12:47:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T00:47:13.721Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 13, 2022 12:47:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T00:47:13.877Z: Cleaning up.
    Mar 13, 2022 12:47:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T00:47:13.957Z: Stopping worker pool...
    Mar 13, 2022 12:49:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T00:49:43.303Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 13, 2022 12:49:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-13T00:49:43.369Z: Worker pool stopped.
    Mar 13, 2022 12:49:49 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-12_16_45_14-1014706705353705305 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): b98b40e8-773b-45d0-b4e4-6e8f4e4209ad and timestamp: 2022-03-13T00:49:49.950000000Z:
                     Metric:                    Value:
                   read_time                     9.933
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 13, 2022 12:49:50 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 56.571 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 28s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/a5unnqvqleuye

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3130

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3130/display/redirect>

Changes:


------------------------------------------
[...truncated 365.53 KB...]
    Mar 12, 2022 6:45:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T18:45:46.468Z: Worker configuration: e2-standard-2 in us-central1-b.
    Mar 12, 2022 6:45:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T18:45:47.067Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 12, 2022 6:45:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T18:45:47.095Z: Expanding GroupByKey operations into optimizable parts.
    Mar 12, 2022 6:45:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T18:45:47.121Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 12, 2022 6:45:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T18:45:47.205Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 12, 2022 6:45:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T18:45:47.241Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 12, 2022 6:45:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T18:45:47.281Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 12, 2022 6:45:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T18:45:47.606Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 12, 2022 6:45:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T18:45:47.683Z: Starting 5 workers in us-central1-b...
    Mar 12, 2022 6:46:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T18:46:14.359Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 12, 2022 6:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T18:46:23.329Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Mar 12, 2022 6:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T18:46:23.367Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Mar 12, 2022 6:46:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T18:46:33.688Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 12, 2022 6:46:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T18:46:56.779Z: Workers have started successfully.
    Mar 12, 2022 6:46:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T18:46:56.803Z: Workers have started successfully.
    Mar 12, 2022 6:47:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-12T18:47:29.366Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFViekZOanJKdmt0ZxoCamQaAmly/streams/CAYaAmpkGgJpciCf8cv2BigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFViekZOanJKdmt0ZxoCamQaAmly/streams/CAYaAmpkGgJpciCf8cv2BigC': offset 128424 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFViekZOanJKdmt0ZxoCamQaAmly/streams/CAYaAmpkGgJpciCf8cv2BigC': offset 128424 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 12, 2022 6:47:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-12T18:47:30.474Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFViekZOanJKdmt0ZxoCamQaAmly/streams/GgJqZBoCaXIg7bzG7QEoAg"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFViekZOanJKdmt0ZxoCamQaAmly/streams/GgJqZBoCaXIg7bzG7QEoAg': offset 119369 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFViekZOanJKdmt0ZxoCamQaAmly/streams/GgJqZBoCaXIg7bzG7QEoAg': offset 119369 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 12, 2022 6:47:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-12T18:47:30.861Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFViekZOanJKdmt0ZxoCamQaAmly/streams/CAgaAmpkGgJpciCM08emASgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFViekZOanJKdmt0ZxoCamQaAmly/streams/CAgaAmpkGgJpciCM08emASgC': offset 84915 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFViekZOanJKdmt0ZxoCamQaAmly/streams/CAgaAmpkGgJpciCM08emASgC': offset 84915 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
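
This FAILED_PRECONDITION is the BigQuery Storage Read API rejecting a ReadRows call whose requested row offset lies beyond what the server has allocated to that stream so far; it typically surfaces when a reader resumes or retries a stream at the wrong position. A minimal sketch of a direct ReadRows call with an explicit offset, assuming the com.google.cloud.bigquery.storage.v1 client (the stream name and offset are placeholders, not values from this job):

    import com.google.api.gax.rpc.ServerStream;
    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    public class ResumeReadSketch {
      public static void main(String[] args) throws Exception {
        try (BigQueryReadClient client = BigQueryReadClient.create()) {
          ReadRowsRequest request = ReadRowsRequest.newBuilder()
              // Placeholder: a real stream name comes from a CreateReadSession response.
              .setReadStream("projects/PROJECT/locations/us/sessions/SESSION/streams/STREAM")
              // Must not exceed the rows already allocated to the stream, or the server
              // replies FAILED_PRECONDITION "offset ... has not been allocated yet".
              .setOffset(0L)
              .build();
          ServerStream<ReadRowsResponse> rows = client.readRowsCallable().call(request);
          for (ReadRowsResponse response : rows) {
            System.out.println("rows in batch: " + response.getRowCount());
          }
        }
      }
    }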

    Mar 12, 2022 6:47:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T18:47:33.758Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 12, 2022 6:47:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T18:47:33.910Z: Cleaning up.
    Mar 12, 2022 6:47:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T18:47:33.977Z: Stopping worker pool...
    Mar 12, 2022 6:50:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T18:50:06.707Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 12, 2022 6:50:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T18:50:06.746Z: Worker pool stopped.
    Mar 12, 2022 6:50:12 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-12_10_45_34-13314844572524140705 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 385b6ffb-6a64-4219-b2c1-80c668202847 and timestamp: 2022-03-12T18:50:12.257000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.933

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 12, 2022 6:50:12 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
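
The warning above means the InfluxDB publisher was handed neither a measurement nor a database, so the collected metrics are printed but not exported. In Beam's test utilities these normally arrive as extra entries in -DbeamTestPipelineOptions; the option names and values below are an assumption patterned on other Beam load tests, not taken from this job's configuration:

    "--influxMeasurement=sql_bqio_read_java_batch","--influxDatabase=beam_test_metrics","--influxHost=http://localhost:8086"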

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 5 mins 11.11 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 48s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/sbtn52zn43fns

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3129

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3129/display/redirect>

Changes:


------------------------------------------
[...truncated 345.92 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ccb68118acb6a542505972be2c895060
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 12, 2022 12:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 12, 2022 12:44:52 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 12, 2022 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 371 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 12, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 12, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 12, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 12, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 12, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 12, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 12, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
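
The SQL text and the two plans above are the planner's view of the test: parse the query through Calcite, choose a Beam plan, then turn the rel tree into a PCollection, which is exactly the BeamSqlRelUtils.toPCollection call that fails below. A minimal sketch of that path; the table-provider wiring and the DDL (schema, location, properties) are assumptions for illustration, not the IT's actual setup:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.impl.BeamSqlEnv;
    import org.apache.beam.sdk.extensions.sql.impl.rel.BeamRelNode;
    import org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils;
    import org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTableProvider;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    static PCollection<Row> readViaSql(Pipeline pipeline) throws Exception {
      BeamSqlEnv sqlEnv = BeamSqlEnv.inMemory(new BigQueryTableProvider());
      // Assumed DDL registering the table (schema abbreviated, properties hypothetical):
      sqlEnv.executeDdl(
          "CREATE EXTERNAL TABLE HACKER_NEWS (`by` VARCHAR, type VARCHAR, title VARCHAR, score INTEGER) "
              + "TYPE bigquery LOCATION 'apache-beam-testing:beam.HACKER_NEWS' "
              + "TBLPROPERTIES '{\"method\": \"DIRECT_READ\"}'");
      BeamRelNode node = sqlEnv.parseQuery(
          "SELECT `by` AS author, type, title, score FROM HACKER_NEWS "
              + "WHERE (type = 'story' OR type = 'job') AND score > 2");
      return BeamSqlRelUtils.toPCollection(pipeline, node);
    }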


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1293369737]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
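
The root cause is that the PCollection<Row> coming out of toPCollection carries no schema, so Beam cannot infer a RowCoder for it. The first remedy the message names is attaching a schema; a minimal sketch, given a PCollection<Row> rows like the one returned in the sketch above (the field types are assumptions matching the projected columns, not the IT's real schema):

    import org.apache.beam.sdk.schemas.Schema;

    Schema schema = Schema.builder()
        .addNullableField("author", Schema.FieldType.STRING)
        .addNullableField("type", Schema.FieldType.STRING)
        .addNullableField("title", Schema.FieldType.STRING)
        .addNullableField("score", Schema.FieldType.INT64)
        .build();
    // With a schema attached, Beam can infer RowCoder.of(schema) for this collection.
    rows.setRowSchema(schema);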

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 12, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 12, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 12, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 12, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 12, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 12, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 12, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@15076442]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)
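
Same failure as readUsingDirectReadMethod above, from the same unset schema. The other remedy the message names is pinning the coder explicitly; with the assumed schema from the previous sketch:

    import org.apache.beam.sdk.coders.RowCoder;

    // Equivalent effect to setRowSchema: supply the row coder by hand.
    rows.setCoder(RowCoder.of(schema));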

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 12, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 12, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 12, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 12, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 12, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 12, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 12, 2022 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 12, 2022 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
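
The BEAMPlan above shows why this variant gets past planning: the projection (usedFields) and the filter are pushed into the source, so BigQuery returns only the four columns and the matching rows. At the IO level the pushed-down read corresponds to a direct read with selected fields and a row restriction; a sketch of the equivalent hand-written read (the table reference is a placeholder):

    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    pipeline.apply(
        BigQueryIO.readTableRows()
            .from("apache-beam-testing:beam.HACKER_NEWS")  // placeholder table reference
            .withMethod(TypedRead.Method.DIRECT_READ)
            // Mirrors usedFields=[[by, type, title, score]] in the plan:
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            // Mirrors the filter logged just above:
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));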
    Mar 12, 2022 12:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 12, 2022 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 12, 2022 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6291052487231147816.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-yy6EyhKw-EKee7Ja7WzIXwqWhFxh1PrBhyYQD7ByGzI.jar
    Mar 12, 2022 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 12, 2022 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 371 files cached, 1 files newly uploaded in 0 seconds
    Mar 12, 2022 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 12, 2022 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145670 bytes, hash 3e17d379bfb9e204cd078ee2ffa534b1a7a091713f350ca74d04d6c5e2701eff> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-PhfTeb-54gTNB47i_6U0saegkXE_NQynTQTWxeJwHv8.pb
    Mar 12, 2022 12:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 12, 2022 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 12, 2022 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 12, 2022 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 12, 2022 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-12_04_45_08-8598198438378815748?project=apache-beam-testing
    Mar 12, 2022 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-12_04_45_08-8598198438378815748
    Mar 12, 2022 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-12_04_45_08-8598198438378815748
    Mar 12, 2022 12:45:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-12T12:45:09.537Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 12, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T12:45:18.878Z: Worker configuration: e2-standard-2 in us-central1-b.
    Mar 12, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T12:45:19.529Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 12, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T12:45:19.563Z: Expanding GroupByKey operations into optimizable parts.
    Mar 12, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T12:45:19.613Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 12, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T12:45:19.689Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 12, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T12:45:19.725Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 12, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T12:45:19.759Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 12, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T12:45:20.126Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 12, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T12:45:20.207Z: Starting 5 workers in us-central1-b...
    Mar 12, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T12:45:24.747Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 12, 2022 12:46:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T12:46:05.053Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 12, 2022 12:46:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T12:46:28.631Z: Workers have started successfully.
    Mar 12, 2022 12:46:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T12:46:28.650Z: Workers have started successfully.
    Mar 12, 2022 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T12:47:00.474Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 12, 2022 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T12:47:00.664Z: Cleaning up.
    Mar 12, 2022 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T12:47:00.760Z: Stopping worker pool...
    Mar 12, 2022 12:49:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T12:49:29.104Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 12, 2022 12:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T12:49:29.177Z: Worker pool stopped.
    Mar 12, 2022 12:49:35 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-12_04_45_08-8598198438378815748 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8c4c260e-3ab4-46c0-9c7c-74eec6d42041 and timestamp: 2022-03-12T12:49:35.833000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.004

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 12, 2022 12:49:35 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 47.786 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 16s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/goq565ujb7u4m

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3128

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3128/display/redirect>

Changes:


------------------------------------------
[...truncated 372.93 KB...]
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHFhNERUNzFESlh2ehoCamQaAmly/streams/CAgaAmpkGgJpciDftfN8KAI': offset 118055 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 12, 2022 6:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-12T06:46:58.264Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHFhNERUNzFESlh2ehoCamQaAmly/streams/CAIaAmpkGgJpciCR3dVJKAI"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHFhNERUNzFESlh2ehoCamQaAmly/streams/CAIaAmpkGgJpciCR3dVJKAI': offset 119949 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHFhNERUNzFESlh2ehoCamQaAmly/streams/CAIaAmpkGgJpciCR3dVJKAI': offset 119949 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 12, 2022 6:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-12T06:46:58.307Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHFhNERUNzFESlh2ehoCamQaAmly/streams/CAYaAmpkGgJpciDau6_UBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHFhNERUNzFESlh2ehoCamQaAmly/streams/CAYaAmpkGgJpciDau6_UBygC': offset 86180 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHFhNERUNzFESlh2ehoCamQaAmly/streams/CAYaAmpkGgJpciDau6_UBygC': offset 86180 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 12, 2022 6:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-12T06:46:58.307Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHFhNERUNzFESlh2ehoCamQaAmly/streams/CAkaAmpkGgJpciC2vIOwAygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHFhNERUNzFESlh2ehoCamQaAmly/streams/CAkaAmpkGgJpciC2vIOwAygC': offset 87569 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHFhNERUNzFESlh2ehoCamQaAmly/streams/CAkaAmpkGgJpciC2vIOwAygC': offset 87569 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 12, 2022 6:47:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T06:47:03.210Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 12, 2022 6:47:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T06:47:03.367Z: Cleaning up.
    Mar 12, 2022 6:47:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T06:47:03.482Z: Stopping worker pool...
    Mar 12, 2022 6:49:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T06:49:30.721Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 12, 2022 6:49:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T06:49:30.773Z: Worker pool stopped.
    Mar 12, 2022 6:49:36 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-11_22_45_05-12553095873526892467 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 39c2967a-4eb9-4cd2-a44e-bbc965d58adc and timestamp: 2022-03-12T06:49:36.983000000Z:
                     Metric:                    Value:
                   read_time                    12.706
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 12, 2022 6:49:37 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 51.618 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 17s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/3uth34a6npmla

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3127

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3127/display/redirect>

Changes:


------------------------------------------
[...truncated 356.65 KB...]
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 12, 2022 12:44:46 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 12, 2022 12:44:46 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 12, 2022 12:44:46 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 12, 2022 12:44:46 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 12, 2022 12:44:46 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 12, 2022 12:44:46 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 12, 2022 12:44:46 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 12, 2022 12:44:46 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 12, 2022 12:44:46 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 12, 2022 12:44:50 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 12, 2022 12:44:50 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 12, 2022 12:44:50 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test108001770772001975.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-aud7mcy4GVGTKrsKzv8MBlQojUO8xGl50EkuFAEoFFE.jar
    Mar 12, 2022 12:44:51 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 371 files cached, 1 files newly uploaded in 0 seconds
    Mar 12, 2022 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 12, 2022 12:44:51 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145671 bytes, hash c554ce528283a4fdaa673a96216a7ef55d0f4b417c7ee165f0bbc8a234b2ee6c> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-xVTOUoKDpP2qZzqWIWp-9V0PS0F8fuFl8LvIojSy7mw.pb
    Mar 12, 2022 12:44:54 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 12, 2022 12:44:54 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 12, 2022 12:44:54 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 12, 2022 12:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 12, 2022 12:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-11_16_44_54-4651964789235307366?project=apache-beam-testing
    Mar 12, 2022 12:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-11_16_44_54-4651964789235307366
    Mar 12, 2022 12:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-11_16_44_54-4651964789235307366
    Mar 12, 2022 12:45:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-12T00:44:59.802Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 12, 2022 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T00:45:39.279Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 12, 2022 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T00:45:41.705Z: Worker configuration: e2-standard-2 in us-central1-f.
    Mar 12, 2022 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T00:45:42.623Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 12, 2022 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T00:45:42.676Z: Expanding GroupByKey operations into optimizable parts.
    Mar 12, 2022 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T00:45:42.712Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 12, 2022 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T00:45:42.793Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 12, 2022 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T00:45:42.848Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 12, 2022 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T00:45:42.884Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 12, 2022 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T00:45:43.525Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 12, 2022 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T00:45:43.709Z: Starting 5 workers in us-central1-f...
    Mar 12, 2022 12:46:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T00:46:32.690Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 12, 2022 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T00:46:58.875Z: Workers have started successfully.
    Mar 12, 2022 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T00:46:58.906Z: Workers have started successfully.
    Mar 12, 2022 12:47:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-12T00:47:29.735Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFZwQVQ5NnlVOVhKWBoCamQaAmly/streams/CAgaAmpkGgJpciCfrczpAygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFZwQVQ5NnlVOVhKWBoCamQaAmly/streams/CAgaAmpkGgJpciCfrczpAygC': offset 116284 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFZwQVQ5NnlVOVhKWBoCamQaAmly/streams/CAgaAmpkGgJpciCfrczpAygC': offset 116284 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 12, 2022 12:47:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-12T00:47:30.724Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFZwQVQ5NnlVOVhKWBoCamQaAmly/streams/CAEaAmpkGgJpciDjyePDBigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFZwQVQ5NnlVOVhKWBoCamQaAmly/streams/CAEaAmpkGgJpciDjyePDBigC': offset 109913 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFZwQVQ5NnlVOVhKWBoCamQaAmly/streams/CAEaAmpkGgJpciDjyePDBigC': offset 109913 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 12, 2022 12:47:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T00:47:32.083Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 12, 2022 12:47:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T00:47:32.245Z: Cleaning up.
    Mar 12, 2022 12:47:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T00:47:32.337Z: Stopping worker pool...
    Mar 12, 2022 12:49:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T00:49:52.984Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 12, 2022 12:49:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-12T00:49:53.300Z: Worker pool stopped.
    Mar 12, 2022 12:50:02 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-11_16_44_54-4651964789235307366 finished with status DONE.
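
Note on the two SEVERE stack traces above: the "FAILED_PRECONDITION: ... offset has not been allocated yet" errors from the BigQuery Storage Read API are transient; the workers retried the affected streams, the read stage finished, and the job completed with status DONE. These exceptions therefore did not fail readUsingDirectReadMethodPushDown; the "2 failed" reported further down are presumably the Coder failures that build #3126 (below) shows in full.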

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 5dd5a323-1eb6-4ce2-9769-d6156a04d25c and timestamp: 2022-03-12T00:50:02.955000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.678

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 12, 2022 12:50:03 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
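
Metrics are skipped here because the InfluxDB measurement/database properties are not set. Other Beam performance jobs pass these as pipeline options; if this job were meant to publish to InfluxDB, the options would look roughly as follows (values illustrative; exact option names should be checked against org.apache.beam.sdk.testutils.publishing.InfluxDBSettings):

    --influxDatabase=beam_test_metrics
    --influxMeasurement=sql_bqio_read_java_batch
    --influxHost=http://localhost:8086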

Gradle Test Executor 11 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.005 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.003 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker Thread 5,5,main]) completed. Took 5 mins 27.164 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings
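
For example, re-running just the failing task with the individual deprecation warnings shown:

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --warning-mode all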

BUILD FAILED in 5m 40s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/6dxultyr6vpl6

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3126

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3126/display/redirect?page=changes>

Changes:

[Ismaël Mejía] [BEAM-13981] Remove Spark Runner specific code for event logging

[vitaly.terentyev] [BEAM-2766] Support null key/values in HadoopFormatIO

[vitaly.terentyev] [BEAM-2766] Fix checkstyle


------------------------------------------
[...truncated 347.02 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ccb68118acb6a542505972be2c895060
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 11, 2022 6:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
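
(The warning refers to the empty --workerHarnessContainerImage= passed on the command line above; the non-deprecated spelling would be --sdkContainerImage= with the same value.)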
    Mar 11, 2022 6:44:52 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 11, 2022 6:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 371 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 11, 2022 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 11, 2022 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 11, 2022 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 11, 2022 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 11, 2022 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 11, 2022 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 11, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1293369737]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
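
The fix the message prescribes is to give the Row PCollection an explicit schema (or coder). A minimal, self-contained Java sketch, assuming an illustrative schema and a pass-through DoFn standing in for ParDo(RowMonitor) (RowSchemaExample is not the IT's actual code):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaExample {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        Schema schema =
            Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt32Field("score")
                .build();
        PCollection<Row> rows =
            p.apply(
                Create.of(Row.withSchema(schema).addValues("a", "story", "t", 3).build())
                    .withCoder(RowCoder.of(schema)));
        // A pass-through ParDo standing in for ParDo(RowMonitor): its Row output
        // has no inferable coder, which is exactly the failure in the log above.
        PCollection<Row> monitored =
            rows.apply(
                ParDo.of(
                    new DoFn<Row, Row>() {
                      @ProcessElement
                      public void process(@Element Row row, OutputReceiver<Row> out) {
                        out.output(row);
                      }
                    }));
        // The fix the error message suggests: attach the schema so a RowCoder is
        // used (equivalently, monitored.setCoder(RowCoder.of(schema))).
        monitored.setRowSchema(schema);
        p.run().waitUntilFinish();
      }
    }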

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 11, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 11, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 11, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 11, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 11, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 11, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 11, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@15076442]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 11, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 11, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 11, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 11, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 11, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 11, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 11, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 11, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 11, 2022 6:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 11, 2022 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 11, 2022 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 11, 2022 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8772733238607983803.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-XpTR-qb3lfojocnn5Y0okus2UemU9dp2mzbMdduVSTg.jar
    Mar 11, 2022 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 371 files cached, 1 files newly uploaded in 0 seconds
    Mar 11, 2022 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 11, 2022 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145671 bytes, hash 16b8e9ca4cb5a0d33ae7667ca58e3732b377cda880975e129369b53d1c3ec051> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Frjpyky1oNM652Z8pY43MrN3zaiAl14Sk2m1PRw-wFE.pb
    Mar 11, 2022 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 11, 2022 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 11, 2022 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 11, 2022 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 11, 2022 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-11_10_45_08-16628749105258873883?project=apache-beam-testing
    Mar 11, 2022 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-11_10_45_08-16628749105258873883
    Mar 11, 2022 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-11_10_45_08-16628749105258873883
    Mar 11, 2022 6:45:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-11T18:45:09.910Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 11, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T18:45:19.546Z: Worker configuration: e2-standard-2 in us-central1-b.
    Mar 11, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T18:45:20.284Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 11, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T18:45:20.317Z: Expanding GroupByKey operations into optimizable parts.
    Mar 11, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T18:45:20.357Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 11, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T18:45:20.468Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 11, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T18:45:20.505Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 11, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T18:45:20.539Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 11, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T18:45:20.978Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 11, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T18:45:21.063Z: Starting 5 workers in us-central1-b...
    Mar 11, 2022 6:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T18:45:37.295Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 11, 2022 6:47:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T18:47:49.071Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 11, 2022 6:48:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T18:48:15.016Z: Workers have started successfully.
    Mar 11, 2022 6:48:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T18:48:15.056Z: Workers have started successfully.
    Mar 11, 2022 6:48:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T18:48:50.160Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 11, 2022 6:48:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T18:48:50.302Z: Cleaning up.
    Mar 11, 2022 6:48:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T18:48:50.378Z: Stopping worker pool...
    Mar 11, 2022 6:51:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T18:51:23.181Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 11, 2022 6:51:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T18:51:23.292Z: Worker pool stopped.
    Mar 11, 2022 6:51:30 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-11_10_45_08-16628749105258873883 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 98b1de34-8fc9-4652-8772-6f61d56ded1d and timestamp: 2022-03-11T18:51:30.187000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                      7.22

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 11, 2022 6:51:30 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.039 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 6 mins 42.143 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 10s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/tdnpm4cvigixc

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3125

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3125/display/redirect>

Changes:


------------------------------------------
[...truncated 357.19 KB...]
    Mar 11, 2022 12:45:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 11, 2022 12:45:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 11, 2022 12:45:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 11, 2022 12:45:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 11, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@15076442]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 11, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 11, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 11, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 11, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 11, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 11, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 11, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 11, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 11, 2022 12:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 11, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 11, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 11, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.38.0-SNAPSHOT-vPEGqxI9MKGLtuCuQz0HzI-33fdG7YkDTKUwhwE408c.jar
    Mar 11, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.38.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.38.0-SNAPSHOT-tests-9waRkOC4eM0X7ABYrPq1tRErTw5OgxB_aYDH-VkPcDE.jar
    Mar 11, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.38.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.38.0-SNAPSHOT-tests-WW_qh_qZ0OtxNK1HIZTckB7QnP0NV0saj8TCBpbuXbM.jar
    Mar 11, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.38.0-SNAPSHOT-Tl-rqROyfFjUbm1pi2LQ2tdsZFwawXrRlgBoBaQ0-f4.jar
    Mar 11, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test630690929321281007.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-GYb3AWBAj19GlsV1X-nNWum2Lxr4Gj4VxFMo-yeZ9w8.jar
    Mar 11, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/hadoop-common/build/libs/beam-sdks-java-io-hadoop-common-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-hadoop-common-2.38.0-SNAPSHOT--Z7FaTJuLSyiPWeUL9VxAgqy53Gn2R4elBkiZEpHHyU.jar
    Mar 11, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/parquet/build/libs/beam-sdks-java-io-parquet-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-parquet-2.38.0-SNAPSHOT--Y3Ki6BsSIhDqZLGsTPV4L2LNYlAKfQgFkOYYPov-fs.jar
    Mar 11, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/join-library/build/libs/beam-sdks-java-extensions-join-library-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-join-library-2.38.0-SNAPSHOT-xz-aSlhizJi8jigGo3amZ2g_MkglE3dBulicsi2UX5U.jar
    Mar 11, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/udf/build/libs/beam-sdks-java-extensions-sql-udf-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-udf-2.38.0-SNAPSHOT-MlQQAS1Je1cH4lXtIOKM3zL6o97S5F7sJQ3zUailw08.jar
    Mar 11, 2022 12:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 363 files cached, 9 files newly uploaded in 0 seconds
    Mar 11, 2022 12:45:26 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 11, 2022 12:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145671 bytes, hash 0a29f1db7f0b9cf64d7eebc45bad9947fca40595ecb0cbf75c58eccd5a43413e> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Cinx238LnPZNfuvEW62ZR_ykBZXssMv3XFjszVpDQT4.pb
    Mar 11, 2022 12:45:31 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 11, 2022 12:45:31 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 11, 2022 12:45:31 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 11, 2022 12:45:32 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 11, 2022 12:45:34 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-11_04_45_32-568435567492569534?project=apache-beam-testing
    Mar 11, 2022 12:45:34 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-11_04_45_32-568435567492569534
    Mar 11, 2022 12:45:34 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-11_04_45_32-568435567492569534
    Mar 11, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-11T12:45:34.930Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
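This warning follows from the options the job passes (--autoscalingAlgorithm=NONE together with --maxNumWorkers=5). Assuming the standard Dataflow service options, it goes away once a fixed worker count is no longer forced, e.g.:

    --autoscalingAlgorithm=THROUGHPUT_BASED --maxNumWorkers=5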
    Mar 11, 2022 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T12:45:50.320Z: Worker configuration: e2-standard-2 in us-central1-c.
    Mar 11, 2022 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T12:45:51.055Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 11, 2022 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T12:45:51.101Z: Expanding GroupByKey operations into optimizable parts.
    Mar 11, 2022 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T12:45:51.136Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 11, 2022 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T12:45:51.229Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 11, 2022 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T12:45:51.254Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 11, 2022 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T12:45:51.288Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 11, 2022 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T12:45:51.706Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 11, 2022 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T12:45:51.781Z: Starting 5 workers in us-central1-c...
    Mar 11, 2022 12:46:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T12:46:14.763Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
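The descriptor quota referenced above can be reclaimed by deleting unused custom metric descriptors. A minimal Java sketch using the google-cloud-monitoring v3 client (an assumption for illustration; the log itself only links the raw monitoring.projects.metricDescriptors.list/delete REST methods):

    import com.google.api.MetricDescriptor;
    import com.google.cloud.monitoring.v3.MetricServiceClient;
    import com.google.monitoring.v3.ProjectName;

    public class DeleteUnusedCustomMetrics {
      public static void main(String[] args) throws Exception {
        try (MetricServiceClient client = MetricServiceClient.create()) {
          for (MetricDescriptor descriptor :
              client.listMetricDescriptors(ProjectName.of("apache-beam-testing")).iterateAll()) {
            if (descriptor.getType().startsWith("custom.googleapis.com/")) {
              // Destructive: deleting a descriptor also deletes its time series,
              // so confirm a descriptor is actually unused before running this.
              client.deleteMetricDescriptor(descriptor.getName());
            }
          }
        }
      }
    }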
    Mar 11, 2022 12:46:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T12:46:36.089Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 11, 2022 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T12:47:00.292Z: Workers have started successfully.
    Mar 11, 2022 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T12:47:00.325Z: Workers have started successfully.
    Mar 11, 2022 12:47:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-11T12:47:36.407Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHZrTXZobXFqcVdMdhoCamQaAmly/streams/CAYaAmpkGgJpciChssCOBSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHZrTXZobXFqcVdMdhoCamQaAmly/streams/CAYaAmpkGgJpciChssCOBSgC': offset 75872 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHZrTXZobXFqcVdMdhoCamQaAmly/streams/CAYaAmpkGgJpciChssCOBSgC': offset 75872 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
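The failure above originates in the BigQuery Storage Read API: a ReadRows call asked the stream to resume at offset 75872, which the server had not yet allocated, i.e. the requested row lies beyond what the stream had produced at that point. A minimal sketch of the equivalent resume request with the google-cloud-bigquerystorage v1 Java client (an assumption for illustration; the suppressed frames show Beam's BigQueryStorageStreamReader driving the same RPC internally):

    import com.google.api.gax.rpc.FailedPreconditionException;
    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    public class ResumeReadAtOffset {
      public static void main(String[] args) throws Exception {
        try (BigQueryReadClient client = BigQueryReadClient.create()) {
          // Stream name and offset copied from the error message above.
          ReadRowsRequest request =
              ReadRowsRequest.newBuilder()
                  .setReadStream(
                      "projects/apache-beam-testing/locations/us/sessions/"
                          + "CAISDHZrTXZobXFqcVdMdhoCamQaAmly/streams/CAYaAmpkGgJpciChssCOBSgC")
                  .setOffset(75872L)
                  .build();
          try {
            for (ReadRowsResponse response : client.readRowsCallable().call(request)) {
              System.out.println(response.getRowCount() + " rows in batch");
            }
          } catch (FailedPreconditionException e) {
            // Raised when the resume offset has not been allocated yet,
            // exactly as in the SEVERE entry above.
            System.err.println(e.getMessage());
          }
        }
      }
    }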

    Mar 11, 2022 12:47:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T12:47:39.030Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 11, 2022 12:47:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T12:47:39.185Z: Cleaning up.
    Mar 11, 2022 12:47:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T12:47:39.260Z: Stopping worker pool...
    Mar 11, 2022 12:50:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T12:50:00.232Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 11, 2022 12:50:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T12:50:00.285Z: Worker pool stopped.
    Mar 11, 2022 12:50:09 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-11_04_45_32-568435567492569534 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 90f5b896-3683-4805-812d-86c359befdb6 and timestamp: 2022-03-11T12:50:09.082000000Z:
                     Metric:                    Value:
                   read_time                     8.385
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 11, 2022 12:50:09 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
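The metrics were computed but not exported because the InfluxDB publisher found no measurement/database configured. Assuming the --influxMeasurement, --influxDatabase and --influxHost pipeline options used by other Beam performance jobs (an assumption; this job's command line only configures the BigQuery metrics sink), publishing could be enabled by appending to -DbeamTestPipelineOptions, for example:

    "--influxMeasurement=sql_bqio_read_java_batch","--influxDatabase=beam_test_metrics","--influxHost=http://localhost:8086"

The measurement name here reuses this job's BigQuery table name; the database and host values are placeholders.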

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.035 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 59.725 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings
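To find where the deprecated Gradle features are used, the failing task can be rerun with the suggested flag:

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --warning-mode all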

BUILD FAILED in 5m 49s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/dgh2pmffs6zba

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3124

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3124/display/redirect?page=changes>

Changes:

[noreply] Merge pull request #17056 from [BEAM-14076] [SnowflakeIO] Add support


------------------------------------------
[...truncated 363.33 KB...]
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-10_22_45_38-1971296153544717949
    Mar 11, 2022 6:45:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-11T06:45:39.268Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 11, 2022 6:46:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T06:46:15.444Z: Worker configuration: e2-standard-2 in us-central1-b.
    Mar 11, 2022 6:46:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T06:46:16.123Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 11, 2022 6:46:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T06:46:16.167Z: Expanding GroupByKey operations into optimizable parts.
    Mar 11, 2022 6:46:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T06:46:16.196Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 11, 2022 6:46:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T06:46:16.257Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 11, 2022 6:46:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T06:46:16.280Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 11, 2022 6:46:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T06:46:16.346Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 11, 2022 6:46:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T06:46:16.728Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 11, 2022 6:46:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T06:46:16.807Z: Starting 5 workers in us-central1-b...
    Mar 11, 2022 6:46:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T06:46:41.307Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 11, 2022 6:47:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T06:47:09.062Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 11, 2022 6:47:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T06:47:33.131Z: Workers have started successfully.
    Mar 11, 2022 6:47:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T06:47:33.162Z: Workers have started successfully.
    Mar 11, 2022 6:48:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-11T06:48:04.640Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDG1vTUpMbEVvWkxWOBoCamQaAmly/streams/CAMaAmpkGgJpciDL1bqVBSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG1vTUpMbEVvWkxWOBoCamQaAmly/streams/CAMaAmpkGgJpciDL1bqVBSgC': offset 70493 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG1vTUpMbEVvWkxWOBoCamQaAmly/streams/CAMaAmpkGgJpciDL1bqVBSgC': offset 70493 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 11, 2022 6:48:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-11T06:48:04.681Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDG1vTUpMbEVvWkxWOBoCamQaAmly/streams/CAQaAmpkGgJpciD4pe-CBSgC"

        [stack trace elided: identical to the preceding failure, ending in io.grpc.StatusRuntimeException: FAILED_PRECONDITION: offset 95747 has not been allocated yet on this stream]

    Mar 11, 2022 6:48:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-11T06:48:04.701Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDG1vTUpMbEVvWkxWOBoCamQaAmly/streams/CAkaAmpkGgJpciC9-_-4BigC"

        [stack trace elided: identical to the preceding failure, ending in io.grpc.StatusRuntimeException: FAILED_PRECONDITION: offset 100009 has not been allocated yet on this stream]

    Mar 11, 2022 6:48:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T06:48:07.811Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 11, 2022 6:48:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T06:48:07.980Z: Cleaning up.
    Mar 11, 2022 6:48:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T06:48:08.071Z: Stopping worker pool...
    Mar 11, 2022 6:50:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T06:50:34.696Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 11, 2022 6:50:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T06:50:34.748Z: Worker pool stopped.
    Mar 11, 2022 6:50:40 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-10_22_45_38-1971296153544717949 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4c5ddab1-2692-4849-8831-0b33a92ca17e and timestamp: 2022-03-11T06:50:41.043000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    11.011

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 11, 2022 6:50:41 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 5 mins 26.131 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 18s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/4gozkjbftdkvu

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3123

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3123/display/redirect?page=changes>

Changes:

[stranniknm] [BEAM-14079] playground - improve accessibility

[noreply] [BEAM-13925] Find and address prs that havent been reviewed in a week

[noreply] Fix import path

[noreply] [BEAM-13925] Fix one more import path

[noreply] Add a StatefulDoFn test that sets event time timer within allowed


------------------------------------------
[...truncated 385.48 KB...]
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG5UanJhVW5CMWxjbxoCamQaAmly/streams/CAgaAmpkGgJpciCftOugAygC': offset 111312 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 11, 2022 12:49:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-11T00:49:56.627Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDG5UanJhVW5CMWxjbxoCamQaAmly/streams/GgJqZBoCaXIgt-eu_wIoAg"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG5UanJhVW5CMWxjbxoCamQaAmly/streams/GgJqZBoCaXIgt-eu_wIoAg': offset 75307 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG5UanJhVW5CMWxjbxoCamQaAmly/streams/GgJqZBoCaXIgt-eu_wIoAg': offset 75307 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 11, 2022 12:49:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-11T00:49:56.632Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDG5UanJhVW5CMWxjbxoCamQaAmly/streams/CAIaAmpkGgJpciCm45QiKAI"

        [stack trace elided: identical to the preceding failure, ending in io.grpc.StatusRuntimeException: FAILED_PRECONDITION: offset 90367 has not been allocated yet on this stream]

    Mar 11, 2022 12:49:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-11T00:49:57.758Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDG5UanJhVW5CMWxjbxoCamQaAmly/streams/CAcaAmpkGgJpciCM-ZvXBSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG5UanJhVW5CMWxjbxoCamQaAmly/streams/CAcaAmpkGgJpciCM-ZvXBSgC': offset 90822 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG5UanJhVW5CMWxjbxoCamQaAmly/streams/CAcaAmpkGgJpciCM-ZvXBSgC': offset 90822 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
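
The FAILED_PRECONDITION above ("offset N has not been allocated yet") comes from the BigQuery Storage Read API when a reader requests rows past what the backend has materialized for the stream so far; it is typically transient. Below is a minimal sketch of how a caller could resume such a read with the standard google-cloud-bigquerystorage v1 client. The class name, the fixed one-second backoff, and the choice to retry indefinitely are illustrative assumptions, not Beam's actual handling:

    import com.google.api.gax.rpc.FailedPreconditionException;
    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    public class ResumableRead {
      /** Drains one read stream, resuming from the last delivered offset on failure. */
      static void readStream(BigQueryReadClient client, String streamName)
          throws InterruptedException {
        long offset = 0;
        while (true) {
          ReadRowsRequest request =
              ReadRowsRequest.newBuilder().setReadStream(streamName).setOffset(offset).build();
          try {
            for (ReadRowsResponse response : client.readRowsCallable().call(request)) {
              offset += response.getRowCount(); // remember progress so a retry can resume here
              // ... hand the decoded rows downstream ...
            }
            return; // stream fully consumed
          } catch (FailedPreconditionException e) {
            // "offset N has not been allocated yet" is usually transient: back off, re-request.
            Thread.sleep(1000L);
          }
        }
      }
    }

Tracking the delivered row count lets the retry re-issue the ReadRowsRequest at the exact resume offset, so rows are neither skipped nor duplicated.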

    Mar 11, 2022 12:50:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T00:50:02.756Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 11, 2022 12:50:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T00:50:02.889Z: Cleaning up.
    Mar 11, 2022 12:50:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T00:50:02.963Z: Stopping worker pool...
    Mar 11, 2022 12:52:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T00:52:25.875Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 11, 2022 12:52:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-11T00:52:25.925Z: Worker pool stopped.
    Mar 11, 2022 12:52:53 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-10_16_48_00-6782695812934778001 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 72f1fad7-f474-4ccb-9f80-62b44ed8df55 and timestamp: 2022-03-11T00:52:53.794000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    12.854

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 11, 2022 12:52:54 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 4 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.115 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.114 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 5 mins 45.993 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 8m 30s
165 actionable tasks: 110 executed, 53 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/7mr4c2icpqark

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3122

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3122/display/redirect?page=changes>

Changes:

[ihr] [BEAM-13923] Fix the answers placeholders locations in the Java katas

[jakub.kukul] [BEAM-14039] Propagate ignore_unknown_columns parameter.


------------------------------------------
[...truncated 358.75 KB...]
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 10, 2022 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 10, 2022 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 10, 2022 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 10, 2022 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 10, 2022 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 10, 2022 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
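
The BEAMPlan above shows both the projection (usedFields) and the filter being pushed into the BigQuery read. As a rough sketch of the query side of such a pipeline, assuming a table named HACKER_NEWS has already been registered with a BigQuery table provider using method=DIRECT_READ (the registration, pipeline options, and the class name here are elided or illustrative; the elided parts are required for this to actually run):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushDownQuerySketch {
      /** Applies the test's query; table registration (DDL / table provider) is elided. */
      static PCollection<Row> applyQuery(Pipeline pipeline) {
        return pipeline.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score "
                    + "FROM HACKER_NEWS "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
      }
    }

Because the planner can prove the filter and the four projected fields are handled by the source, the BeamCalcRel disappears and the read itself returns only the matching rows.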
    Mar 10, 2022 6:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 10, 2022 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 10, 2022 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 10, 2022 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8113550750292274264.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-rg4hzGgxMjbcmjtXfvO_xZPuHN9Y8dLkrlGpnrT0OJQ.jar
    Mar 10, 2022 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 371 files cached, 1 files newly uploaded in 0 seconds
    Mar 10, 2022 6:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 10, 2022 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145671 bytes, hash 40594741d90565c47798bba8e0883370e8e0349cacaa888d1e6390472dfffa85> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-QFlHQdkFZcR3mLuo4IgzcOjgNJysqoiNHmOQRy3_-oU.pb
    Mar 10, 2022 6:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 10, 2022 6:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 10, 2022 6:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 10, 2022 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 10, 2022 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-10_10_45_05-13201859082540727692?project=apache-beam-testing
    Mar 10, 2022 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-10_10_45_05-13201859082540727692
    Mar 10, 2022 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-10_10_45_05-13201859082540727692
    Mar 10, 2022 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-10T18:45:06.906Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 10, 2022 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T18:45:17.523Z: Worker configuration: e2-standard-2 in us-central1-f.
    Mar 10, 2022 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T18:45:18.231Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 10, 2022 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T18:45:18.271Z: Expanding GroupByKey operations into optimizable parts.
    Mar 10, 2022 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T18:45:18.299Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 10, 2022 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T18:45:18.383Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 10, 2022 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T18:45:18.412Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 10, 2022 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T18:45:18.445Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 10, 2022 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T18:45:18.779Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 10, 2022 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T18:45:18.843Z: Starting 5 workers in us-central1-f...
    Mar 10, 2022 6:45:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T18:45:47.981Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 10, 2022 6:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T18:45:59.007Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Mar 10, 2022 6:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T18:45:59.035Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Mar 10, 2022 6:46:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T18:46:09.285Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 10, 2022 6:46:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T18:46:29.680Z: Workers have started successfully.
    Mar 10, 2022 6:46:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T18:46:29.718Z: Workers have started successfully.
    Mar 10, 2022 6:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-10T18:47:04.499Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGVqMTZhRGExeDdDcxoCamQaAmly/streams/GgJqZBoCaXIgrtamiQcoAg"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGVqMTZhRGExeDdDcxoCamQaAmly/streams/GgJqZBoCaXIgrtamiQcoAg': offset 73727 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGVqMTZhRGExeDdDcxoCamQaAmly/streams/GgJqZBoCaXIgrtamiQcoAg': offset 73727 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 10, 2022 6:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-10T18:47:04.526Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGVqMTZhRGExeDdDcxoCamQaAmly/streams/CAUaAmpkGgJpciDjopKTAygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGVqMTZhRGExeDdDcxoCamQaAmly/streams/CAUaAmpkGgJpciDjopKTAygC': offset 82720 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGVqMTZhRGExeDdDcxoCamQaAmly/streams/CAUaAmpkGgJpciDjopKTAygC': offset 82720 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 10, 2022 6:47:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T18:47:09.881Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 10, 2022 6:47:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T18:47:10.191Z: Cleaning up.
    Mar 10, 2022 6:47:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T18:47:10.266Z: Stopping worker pool...
    Mar 10, 2022 6:49:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T18:49:38.795Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 10, 2022 6:49:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T18:49:38.842Z: Worker pool stopped.
    Mar 10, 2022 6:49:44 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-10_10_45_05-13201859082540727692 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): cc867519-49b7-429a-b79a-c38da77d4dd4 and timestamp: 2022-03-10T18:49:44.259000000Z:
                     Metric:                    Value:
                   read_time                    13.008
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 10, 2022 6:49:44 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 57.791 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 24s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/vrfkrxmabk4ky

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3121

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3121/display/redirect>

Changes:


------------------------------------------
[...truncated 368.18 KB...]
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDhqSm5hdUVWZS1QcRoCamQaAmly/streams/CAgaAmpkGgJpciDg8qDbBygC': offset 81606 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 10, 2022 12:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-10T12:47:00.736Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDhqSm5hdUVWZS1QcRoCamQaAmly/streams/CAMaAmpkGgJpciCA059DKAI"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDhqSm5hdUVWZS1QcRoCamQaAmly/streams/CAMaAmpkGgJpciCA059DKAI': offset 123780 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDhqSm5hdUVWZS1QcRoCamQaAmly/streams/CAMaAmpkGgJpciCA059DKAI': offset 123780 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 10, 2022 12:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-10T12:47:00.952Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDhqSm5hdUVWZS1QcRoCamQaAmly/streams/CAYaAmpkGgJpciDS2_HBAigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDhqSm5hdUVWZS1QcRoCamQaAmly/streams/CAYaAmpkGgJpciDS2_HBAigC': offset 81869 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDhqSm5hdUVWZS1QcRoCamQaAmly/streams/CAYaAmpkGgJpciDS2_HBAigC': offset 81869 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 10, 2022 12:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-10T12:47:00.986Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDhqSm5hdUVWZS1QcRoCamQaAmly/streams/CAkaAmpkGgJpciD-4a3cBigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDhqSm5hdUVWZS1QcRoCamQaAmly/streams/CAkaAmpkGgJpciD-4a3cBigC': offset 66652 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDhqSm5hdUVWZS1QcRoCamQaAmly/streams/CAkaAmpkGgJpciD-4a3cBigC': offset 66652 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 10, 2022 12:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T12:47:05.202Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 10, 2022 12:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T12:47:05.338Z: Cleaning up.
    Mar 10, 2022 12:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T12:47:05.406Z: Stopping worker pool...
    Mar 10, 2022 12:49:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T12:49:37.120Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 10, 2022 12:49:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T12:49:37.173Z: Worker pool stopped.
    Mar 10, 2022 12:49:42 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-10_04_45_07-8750196810301579145 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): b47a0810-6922-4a04-b943-848b0e297907 and timestamp: 2022-03-10T12:49:42.374000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    12.187

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 10, 2022 12:49:42 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 54.903 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 23s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/hjhg6svknrgtk

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3120

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3120/display/redirect?page=changes>

Changes:

[noreply] Revert "[BEAM-13993] [BEAM-10039] Import beam plugins before starting


------------------------------------------
[...truncated 357.99 KB...]
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 10, 2022 6:48:33 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 10, 2022 6:48:36 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 10, 2022 6:48:39 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 371 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 10, 2022 6:48:47 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 10, 2022 6:48:47 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 10, 2022 6:48:51 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 10, 2022 6:48:51 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 10, 2022 6:48:51 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 10, 2022 6:48:52 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 10, 2022 6:48:53 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@403740863]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
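
    [Editor's note: the IllegalStateException above is self-describing: a PCollection of Beam Rows has no inferable Coder unless a Schema is attached. The failure occurs inside the SDK's BeamSqlRelUtils.toPCollection rather than in test code, so the sketch below only illustrates the pattern the message asks for, not a patch for this suite; the schema, field types, and values are invented for illustration, and a Beam Java SDK on the classpath is assumed.]

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Schema mirroring the four columns the failing query selects (types assumed).
        Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

        Row row = Row.withSchema(schema)
            .addValues("someone", "story", "Example title", 3L)
            .build();

        // Without withRowSchema (or setRowSchema on the result), coder inference
        // for Row fails with exactly the IllegalStateException quoted above.
        PCollection<Row> rows = pipeline.apply(Create.of(row).withRowSchema(schema));

        // The same fix applies to any existing PCollection<Row> before it is consumed:
        //   rows.setRowSchema(schema);

        pipeline.run().waitUntilFinish();
      }
    }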

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 10, 2022 6:48:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 10, 2022 6:48:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 10, 2022 6:48:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 10, 2022 6:48:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 10, 2022 6:48:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 10, 2022 6:48:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 10, 2022 6:48:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@306204573]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 10, 2022 6:48:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 10, 2022 6:48:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 10, 2022 6:48:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 10, 2022 6:48:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 10, 2022 6:48:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 10, 2022 6:48:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 10, 2022 6:48:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 10, 2022 6:48:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
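
    [Editor's note: the plan and the filter line above make the push-down visible. Unlike the BeamCalcRel plans earlier, which scan all fourteen input fields ($t0..$t13) and filter afterwards, BeamPushDownIOSourceRel requests only the four used fields and hands the predicate to the BigQuery Storage API. A hedged sketch of the equivalent hand-written direct read follows; the table name is assumed for illustration, and beam-sdks-java-io-google-cloud-platform is assumed on the classpath.]

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        pipeline.apply("Read HACKER_NEWS with push-down",
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // table assumed for illustration
                .withMethod(Method.DIRECT_READ)                // BigQuery Storage Read API
                // Project push-down: only the used columns cross the wire...
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // ...and filter push-down: the predicate is evaluated server-side.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        pipeline.run().waitUntilFinish();
      }
    }

    [The selected-fields list and row restriction in the sketch correspond one-to-one to the usedFields list and the pushed-down filter in the log above.]
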
    Mar 10, 2022 6:48:57 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 10, 2022 6:49:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 10, 2022 6:49:09 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 10, 2022 6:49:11 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6459314570629010966.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-isNUSuv7cUubgXfx03_asAkhDVIexIxFJNUXZdisYp8.jar
    Mar 10, 2022 6:49:17 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 371 files cached, 1 files newly uploaded in 7 seconds
    Mar 10, 2022 6:49:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 10, 2022 6:49:17 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145672 bytes, hash 72810bb2a728d7889640f76c359b11fc9fdc31ccc0f812acbb88df848fbb0975> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-coELsqco14iWQPdsNZsR_J_cMczA-BKsu4jfhI-7CXU.pb
    Mar 10, 2022 6:49:26 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 10, 2022 6:49:27 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 10, 2022 6:49:27 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 10, 2022 6:49:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 10, 2022 6:49:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-09_22_49_28-3278606628388839370?project=apache-beam-testing
    Mar 10, 2022 6:49:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-09_22_49_28-3278606628388839370
    Mar 10, 2022 6:49:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-09_22_49_28-3278606628388839370
    Mar 10, 2022 6:49:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-10T06:49:29.536Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 10, 2022 6:49:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T06:49:39.499Z: Worker configuration: e2-standard-2 in us-central1-a.
    Mar 10, 2022 6:49:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T06:49:40.241Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 10, 2022 6:49:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T06:49:40.299Z: Expanding GroupByKey operations into optimizable parts.
    Mar 10, 2022 6:49:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T06:49:40.347Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 10, 2022 6:49:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T06:49:40.423Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 10, 2022 6:49:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T06:49:40.456Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 10, 2022 6:49:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T06:49:40.498Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 10, 2022 6:49:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T06:49:41.094Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 10, 2022 6:49:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T06:49:41.199Z: Starting 5 workers in us-central1-a...
    Mar 10, 2022 6:50:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T06:49:59.987Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 10, 2022 6:50:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T06:50:25.602Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Mar 10, 2022 6:50:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T06:50:25.629Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Mar 10, 2022 6:50:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T06:50:35.999Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 10, 2022 6:50:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T06:50:49.569Z: Workers have started successfully.
    Mar 10, 2022 6:50:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T06:50:49.599Z: Workers have started successfully.
    Mar 10, 2022 6:51:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T06:51:25.425Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 10, 2022 6:51:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T06:51:25.599Z: Cleaning up.
    Mar 10, 2022 6:51:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T06:51:25.671Z: Stopping worker pool...
    Mar 10, 2022 6:53:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T06:53:52.717Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 10, 2022 6:53:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T06:53:52.770Z: Worker pool stopped.
    Mar 10, 2022 6:54:11 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-09_22_49_28-3278606628388839370 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): becf5c9e-0960-4a10-8c94-d654a614fe14 and timestamp: 2022-03-10T06:54:11.646000000Z:
                     Metric:                    Value:
                   read_time                     8.703
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 10, 2022 6:54:12 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
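
    [Editor's note: the publisher skips the export when these settings are absent, so the warning does not affect the test outcome. Publishing would presumably require passing the InfluxDB settings alongside the other -DbeamTestPipelineOptions entries, e.g. --influxMeasurement=sql_bqio_read_java_batch, --influxDatabase=..., --influxHost=...; the option names are assumed from Beam's test utilities, and the concrete values belong to the job configuration.]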

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.106 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.372 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 5 mins 55.786 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 9m 15s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/5og5aroqe6eey

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3119

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3119/display/redirect?page=changes>

Changes:

[hengfeng] [BEAM-12164]: display the metadata table's name on UI

[noreply] Merge pull request #16844 from [BEAM-12164]: allow for nanosecond

[noreply] [BEAM-13904] Increase unit testing in the reflectx package (#17024)


------------------------------------------
[...truncated 355.14 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8ca33acd61bf1c717a72ab0c57d74e70
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 10, 2022 12:50:57 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 10, 2022 12:50:58 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 10, 2022 12:50:59 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 371 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 10, 2022 12:51:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 10, 2022 12:51:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 10, 2022 12:51:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 10, 2022 12:51:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 10, 2022 12:51:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 10, 2022 12:51:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 10, 2022 12:51:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1293369737]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 10, 2022 12:51:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 10, 2022 12:51:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 10, 2022 12:51:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 10, 2022 12:51:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 10, 2022 12:51:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 10, 2022 12:51:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 10, 2022 12:51:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@15076442]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 10, 2022 12:51:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 10, 2022 12:51:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 10, 2022 12:51:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 10, 2022 12:51:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 10, 2022 12:51:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 10, 2022 12:51:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 10, 2022 12:51:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 10, 2022 12:51:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 10, 2022 12:51:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 10, 2022 12:51:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 372 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 10, 2022 12:51:10 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 10, 2022 12:51:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4210298638008994578.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-yqLs0MIFKBGvgDbozPYxuBIAa4JSVb0GIoyvqO_Jy2o.jar
    Mar 10, 2022 12:51:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 371 files cached, 1 files newly uploaded in 0 seconds
    Mar 10, 2022 12:51:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 10, 2022 12:51:11 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145669 bytes, hash c6cc6a9bdbf5163ab56bc443dd3cccb114d33679c1071f22f5b270bfe460bff4> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-xsxqm9v1Fjq1a8RD3TzMsRTTNnnBBx8i9bJwv-Rgv_Q.pb
    Mar 10, 2022 12:51:14 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 10, 2022 12:51:14 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 10, 2022 12:51:14 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 10, 2022 12:51:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 10, 2022 12:51:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-09_16_51_14-17299658191102058301?project=apache-beam-testing
    Mar 10, 2022 12:51:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-09_16_51_14-17299658191102058301
    Mar 10, 2022 12:51:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-09_16_51_14-17299658191102058301
    Mar 10, 2022 12:51:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-10T00:51:18.950Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 10, 2022 12:51:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T00:51:35.456Z: Worker configuration: e2-standard-2 in us-central1-a.
    Mar 10, 2022 12:51:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T00:51:37.363Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 10, 2022 12:51:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T00:51:37.406Z: Expanding GroupByKey operations into optimizable parts.
    Mar 10, 2022 12:51:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T00:51:37.434Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 10, 2022 12:51:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T00:51:37.511Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 10, 2022 12:51:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T00:51:37.550Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 10, 2022 12:51:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T00:51:37.576Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 10, 2022 12:51:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T00:51:37.975Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 10, 2022 12:51:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T00:51:38.095Z: Starting 5 workers in us-central1-a...
    Mar 10, 2022 12:51:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T00:51:42.939Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 10, 2022 12:52:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T00:52:24.484Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 10, 2022 12:52:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T00:52:48.751Z: Workers have started successfully.
    Mar 10, 2022 12:52:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T00:52:48.783Z: Workers have started successfully.
    Mar 10, 2022 12:53:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T00:53:18.868Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 10, 2022 12:53:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T00:53:19.048Z: Cleaning up.
    Mar 10, 2022 12:53:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T00:53:19.198Z: Stopping worker pool...
    Mar 10, 2022 12:55:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T00:55:52.424Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 10, 2022 12:55:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-10T00:55:52.469Z: Worker pool stopped.
    Mar 10, 2022 12:55:59 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-09_16_51_14-17299658191102058301 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): b52f1f30-79c4-4e8e-a691-4ee6e8e4e17a and timestamp: 2022-03-10T00:55:59.659000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.918

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 10, 2022 12:55:59 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 5 mins 5.94 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 5s
165 actionable tasks: 107 executed, 56 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/6p47huknjfnsi

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3118

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3118/display/redirect?page=changes>

Changes:

[noreply] Merge pull request #17036 from [BEAM-12164] Convert all static instances

[noreply] fix variable reference (#16991)


------------------------------------------
[...truncated 356.90 KB...]
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 09, 2022 7:22:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 09, 2022 7:22:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 09, 2022 7:22:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 09, 2022 7:22:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 09, 2022 7:22:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 09, 2022 7:22:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 09, 2022 7:22:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1246215324]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 09, 2022 7:22:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 09, 2022 7:22:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 09, 2022 7:22:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 09, 2022 7:22:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 09, 2022 7:22:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 09, 2022 7:22:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 09, 2022 7:22:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 09, 2022 7:22:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 09, 2022 7:22:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 09, 2022 7:22:28 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 371 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 09, 2022 7:22:29 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 09, 2022 7:22:30 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1019022335408950835.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-TSmwamlqcXlRp2RHUUx6IN7jhtxOYgPK4HX-lY8Gt3g.jar
    Mar 09, 2022 7:22:37 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 370 files cached, 1 files newly uploaded in 9 seconds
    Mar 09, 2022 7:22:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 09, 2022 7:22:38 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145319 bytes, hash d5201f0a01f0116a1c1a2d8e6f08cbb47a83f5e9ff03c714c3786d7669550666> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-1SAfCgHwEWocGi2ObwjLtHqD9en_A8cUw3htdmlVBmY.pb
    Mar 09, 2022 7:22:45 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 09, 2022 7:22:45 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 09, 2022 7:22:45 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 09, 2022 7:22:46 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 09, 2022 7:22:46 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-09_11_22_46-4915706884102809943?project=apache-beam-testing
    Mar 09, 2022 7:22:46 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-09_11_22_46-4915706884102809943
    Mar 09, 2022 7:22:46 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-09_11_22_46-4915706884102809943
    Mar 09, 2022 7:22:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-09T19:22:47.578Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 09, 2022 7:23:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T19:23:04.814Z: Worker configuration: e2-standard-2 in us-central1-f.
    Mar 09, 2022 7:23:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T19:23:05.531Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 09, 2022 7:23:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T19:23:05.574Z: Expanding GroupByKey operations into optimizable parts.
    Mar 09, 2022 7:23:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T19:23:05.608Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 09, 2022 7:23:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T19:23:05.700Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 09, 2022 7:23:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T19:23:05.728Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 09, 2022 7:23:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T19:23:05.765Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 09, 2022 7:23:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T19:23:06.176Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 09, 2022 7:23:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T19:23:06.260Z: Starting 5 workers in us-central1-f...
    Mar 09, 2022 7:23:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T19:23:34.484Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 09, 2022 7:23:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T19:23:52.475Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 09, 2022 7:24:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T19:24:19.225Z: Workers have started successfully.
    Mar 09, 2022 7:24:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T19:24:19.258Z: Workers have started successfully.
    Mar 09, 2022 7:24:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-09T19:24:56.910Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGpTaTZZa3BzbUowdxoCamQaAmly/streams/CAgaAmpkGgJpciCn--PjBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGpTaTZZa3BzbUowdxoCamQaAmly/streams/CAgaAmpkGgJpciCn--PjBygC': offset 64924 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGpTaTZZa3BzbUowdxoCamQaAmly/streams/CAgaAmpkGgJpciCn--PjBygC': offset 64924 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Mar 09, 2022 7:25:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T19:25:03.450Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 09, 2022 7:25:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T19:25:03.637Z: Cleaning up.
    Mar 09, 2022 7:25:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T19:25:03.748Z: Stopping worker pool...
    Mar 09, 2022 7:27:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T19:27:24.339Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 09, 2022 7:27:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T19:27:24.401Z: Worker pool stopped.
    Mar 09, 2022 7:27:29 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-09_11_22_46-4915706884102809943 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): e65bc940-0bcc-44e0-abd3-29f0c51351f4 and timestamp: 2022-03-09T19:27:30.123000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    14.101

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 09, 2022 7:27:30 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.064 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.093 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 5 mins 46.406 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 8m 12s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/psv6tcmcfxlta

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3117

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3117/display/redirect>

Changes:


------------------------------------------
[...truncated 348.11 KB...]

Gradle Test Executor 160 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 09, 2022 12:46:11 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 09, 2022 12:46:13 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 09, 2022 12:46:15 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 370 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 09, 2022 12:46:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 09, 2022 12:46:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 09, 2022 12:46:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 09, 2022 12:46:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 09, 2022 12:46:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 09, 2022 12:46:23 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 09, 2022 12:46:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
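
For reference, the SQL statement and the two plans above are what Beam SQL's Calcite planner emits while compiling the test query. A minimal sketch of issuing the same query through the public SqlTransform entry point -- assuming a table provider has already registered `beam`.`HACKER_NEWS`, which this sketch does not show -- would look roughly like:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class HackerNewsQuerySketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        // Assumes `beam`.`HACKER_NEWS` has been registered with a table provider
        // (the integration test wires up a BigQuery table provider elsewhere);
        // without one, query planning fails at pipeline construction time.
        PCollection<Row> rows = p.apply(
            SqlTransform.query(
                "SELECT `by` AS `author`, `type`, `title`, `score` "
                    + "FROM `beam`.`HACKER_NEWS` "
                    + "WHERE (`type` = 'story' OR `type` = 'job') AND `score` > 2"));
        p.run().waitUntilFinish();
      }
    }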


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@447899837]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
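
The exception message above already names the fix: the ParDo output is a PCollection of Beam Rows with no schema attached, so no coder can be inferred. A minimal sketch of the suggested PCollection.setRowSchema call, with a hypothetical schema matching the four projected columns (illustrative only, not the test's actual code):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.schemas.Schema.FieldType;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFix {
      // 'rows' stands in for the ParDo(RowMonitor) output named in the exception.
      static PCollection<Row> attachSchema(PCollection<Row> rows) {
        Schema schema = Schema.builder()
            .addNullableField("author", FieldType.STRING)
            .addNullableField("type", FieldType.STRING)
            .addNullableField("title", FieldType.STRING)
            .addNullableField("score", FieldType.INT64)
            .build();
        // With a schema attached, the runner can infer a RowCoder for the output.
        return rows.setRowSchema(schema);
      }
    }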

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 09, 2022 12:46:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 09, 2022 12:46:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 09, 2022 12:46:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 09, 2022 12:46:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 09, 2022 12:46:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 09, 2022 12:46:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 09, 2022 12:46:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@484878781]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 09, 2022 12:46:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 09, 2022 12:46:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 09, 2022 12:46:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 09, 2022 12:46:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 09, 2022 12:46:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 09, 2022 12:46:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 09, 2022 12:46:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 09, 2022 12:46:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
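
At the IO level, the push-down logged above amounts to a Storage Read API read with a projected field list and a row restriction. A hedged sketch of the equivalent standalone BigQueryIO configuration (the table reference and field names are inferred from the log, not taken from the test source):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    class PushDownReadSketch {
      static TypedRead<TableRow> pushedDownRead() {
        // DIRECT_READ selects the BigQuery Storage Read API; the selected fields
        // and row restriction mirror usedFields and the pushed-down filter above.
        return BigQueryIO.readTableRows()
            .from("apache-beam-testing:beam.HACKER_NEWS") // hypothetical table reference
            .withMethod(TypedRead.Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
      }
    }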
    Mar 09, 2022 12:46:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 09, 2022 12:46:36 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 371 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 09, 2022 12:46:36 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 09, 2022 12:46:37 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6229898027270000058.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-SnPBvYAsS4k_2O5zJbs_WXFEOz64OdmeY-jnJCmeMyk.jar
    Mar 09, 2022 12:46:40 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 370 files cached, 1 files newly uploaded in 3 seconds
    Mar 09, 2022 12:46:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 09, 2022 12:46:40 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145321 bytes, hash 15e5d08cb2c0f3cf34ce013b2e6edecd582a17ed2753aa39b228c052c113efeb> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-FeXQjLLA8880zgE7Lm7ezVgqF-0nU6o5sijAUsET7-s.pb
    Mar 09, 2022 12:46:45 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 09, 2022 12:46:45 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 09, 2022 12:46:45 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 09, 2022 12:46:45 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 09, 2022 12:46:46 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-09_04_46_45-15925547151402495373?project=apache-beam-testing
    Mar 09, 2022 12:46:46 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-09_04_46_45-15925547151402495373
    Mar 09, 2022 12:46:46 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-09_04_46_45-15925547151402495373
    Mar 09, 2022 12:46:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-09T12:46:47.021Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 09, 2022 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T12:46:56.177Z: Worker configuration: e2-standard-2 in us-central1-c.
    Mar 09, 2022 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T12:46:56.943Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 09, 2022 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T12:46:56.982Z: Expanding GroupByKey operations into optimizable parts.
    Mar 09, 2022 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T12:46:57.011Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 09, 2022 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T12:46:57.087Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 09, 2022 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T12:46:57.113Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 09, 2022 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T12:46:57.139Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 09, 2022 12:46:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T12:46:57.502Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 09, 2022 12:46:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T12:46:57.598Z: Starting 5 workers in us-central1-c...
    Mar 09, 2022 12:47:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T12:47:20.740Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 09, 2022 12:47:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T12:47:31.094Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Mar 09, 2022 12:47:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T12:47:31.164Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Mar 09, 2022 12:47:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T12:47:41.491Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 09, 2022 12:48:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T12:48:06.630Z: Workers have started successfully.
    Mar 09, 2022 12:48:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T12:48:06.686Z: Workers have started successfully.
    Mar 09, 2022 12:48:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T12:48:43.528Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 09, 2022 12:48:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T12:48:43.728Z: Cleaning up.
    Mar 09, 2022 12:48:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T12:48:43.797Z: Stopping worker pool...
    Mar 09, 2022 12:51:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T12:51:02.925Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 09, 2022 12:51:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T12:51:02.984Z: Worker pool stopped.
    Mar 09, 2022 12:51:09 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-09_04_46_45-15925547151402495373 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 5049f6ac-b762-4442-86a6-2fc183330872 and timestamp: 2022-03-09T12:51:09.085000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.884

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 09, 2022 12:51:09 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 160 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.001 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.051 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 5 mins 6.628 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 37s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/cnfsstzrp55do

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3116

See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3116/display/redirect?page=changes>

Changes:

[noreply] Merge pull request #16976 from [BEAM-14010] [Website] Add Playground

[noreply] [BEAM-12447] Upgrade cloud build client and add/cleanup options (#17032)


------------------------------------------
[...truncated 347.31 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
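
The -DbeamTestPipelineOptions property in the command line above is the channel through which the forked test JVM receives its pipeline options. A minimal sketch of the usual consumption path in a Beam integration test (the IT's own wiring may differ):

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    class OptionsSketch {
      static TestPipeline createPipeline() {
        // testingPipelineOptions() parses the JSON array supplied through the
        // beamTestPipelineOptions system property set on the test JVM.
        PipelineOptions options = TestPipeline.testingPipelineOptions();
        return TestPipeline.fromOptions(options);
      }
    }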

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 09, 2022 6:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 09, 2022 6:44:57 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 09, 2022 6:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 370 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 09, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 09, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 09, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 09, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 09, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 09, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 09, 2022 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1857031358]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 09, 2022 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 09, 2022 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 09, 2022 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 09, 2022 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 09, 2022 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 09, 2022 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 09, 2022 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@604047477]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 09, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 09, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 09, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 09, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 09, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 09, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 09, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 09, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 09, 2022 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 09, 2022 6:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 371 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 09, 2022 6:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 09, 2022 6:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1520094494100860161.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ov2nGmwABbgBNFbEKajXF4EhTPN0Onfxs8iRRUcJChg.jar
    Mar 09, 2022 6:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 370 files cached, 1 files newly uploaded in 0 seconds
    Mar 09, 2022 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 09, 2022 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145321 bytes, hash a0056d8dcd0451df37632d296eb657335d7a3cfc11324f3e2b54b7213e663c2e> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-oAVtjc0EUd83Yy0pbrZXM116PPwRMk8-K1S3IT5mPC4.pb
    Mar 09, 2022 6:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 09, 2022 6:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 09, 2022 6:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 09, 2022 6:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 09, 2022 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-08_22_45_13-18027099499161680012?project=apache-beam-testing
    Mar 09, 2022 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-08_22_45_13-18027099499161680012
    Mar 09, 2022 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-08_22_45_13-18027099499161680012
    Mar 09, 2022 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-09T06:45:14.891Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 09, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T06:45:24.264Z: Worker configuration: e2-standard-2 in us-central1-b.
    Mar 09, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T06:45:25.125Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 09, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T06:45:25.189Z: Expanding GroupByKey operations into optimizable parts.
    Mar 09, 2022 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T06:45:25.252Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 09, 2022 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T06:45:25.347Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 09, 2022 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T06:45:25.372Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 09, 2022 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T06:45:25.426Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 09, 2022 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T06:45:25.934Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 09, 2022 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T06:45:26.007Z: Starting 5 workers in us-central1-b...
    Mar 09, 2022 6:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T06:45:46.389Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 09, 2022 6:46:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T06:46:06.611Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Mar 09, 2022 6:46:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T06:46:06.638Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Mar 09, 2022 6:46:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T06:46:16.974Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 09, 2022 6:46:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T06:46:41.463Z: Workers have started successfully.
    Mar 09, 2022 6:46:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T06:46:41.483Z: Workers have started successfully.
    Mar 09, 2022 6:47:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T06:47:14.755Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 09, 2022 6:47:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T06:47:14.957Z: Cleaning up.
    Mar 09, 2022 6:47:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T06:47:15.053Z: Stopping worker pool...
    Mar 09, 2022 6:50:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T06:50:02.712Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 09, 2022 6:50:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T06:50:02.763Z: Worker pool stopped.
    Mar 09, 2022 6:50:08 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-08_22_45_13-18027099499161680012 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d3cc0837-ec4a-40f8-9ccd-0b879bc44d67 and timestamp: 2022-03-09T06:50:08.611000000Z:
                     Metric:                    Value:
                   read_time                     8.567
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 09, 2022 6:50:08 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 5 mins 16.492 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 46s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/ckhyxtjxq2ifw

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3115

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3115/display/redirect?page=changes>

Changes:

[noreply] [BEAM-10976] Bundle finalization: Harness and some exec changes (#16980)


------------------------------------------
[...truncated 350.87 KB...]
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 09, 2022 12:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
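
The deprecated option named here is visible in the test's own invocation: the -DbeamTestPipelineOptions array passes an empty --workerHarnessContainerImage=. Per the warning, a hypothetical updated option list would simply carry the replacement flag instead, e.g.:

    [..., "--sdkContainerImage=", ...]   instead of   [..., "--workerHarnessContainerImage=", ...]
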
    Mar 09, 2022 12:45:29 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 09, 2022 12:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 370 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 09, 2022 12:45:36 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 09, 2022 12:45:36 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 09, 2022 12:45:39 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 09, 2022 12:45:39 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 09, 2022 12:45:39 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 09, 2022 12:45:40 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 09, 2022 12:45:40 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@447899837]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 09, 2022 12:45:41 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 09, 2022 12:45:41 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 09, 2022 12:45:41 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 09, 2022 12:45:41 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 09, 2022 12:45:41 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 09, 2022 12:45:41 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 09, 2022 12:45:42 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1246215324]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)
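
The two FAILED traces above share one root cause, stated in the error text itself: the RowMonitor ParDo emits Beam Rows with no attached schema, so no coder can be inferred for its output PCollection. A minimal, hypothetical Java sketch of the fix the message asks for — attach a schema (or a RowCoder) to the Row-typed output. Class and DoFn names here are illustrative, not the integration test's actual code:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        // Schema matching the four columns projected by the query above.
        Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

        Pipeline p = Pipeline.create();

        PCollection<Row> rows =
            p.apply(Create.of(
                    Row.withSchema(schema)
                        .addValues("someone", "story", "a title", 3L)
                        .build())
                .withCoder(RowCoder.of(schema)))
             // Stand-in for ParDo(RowMonitor): a pass-through DoFn emitting Rows.
             .apply("PassThrough", ParDo.of(new DoFn<Row, Row>() {
               @ProcessElement
               public void process(@Element Row row, OutputReceiver<Row> out) {
                 out.output(row);
               }
             }))
             // The fix: without setRowSchema (or setCoder(RowCoder.of(schema))),
             // coder inference fails exactly as in the stack traces above.
             .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }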

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 09, 2022 12:45:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 09, 2022 12:45:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 09, 2022 12:45:42 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 09, 2022 12:45:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 09, 2022 12:45:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 09, 2022 12:45:42 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 09, 2022 12:45:42 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 09, 2022 12:45:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
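
For contrast with the two coder failures above: in this push-down run the planner collapsed the LogicalProject and LogicalFilter into a single BeamPushDownIOSourceRel, handing both the projection (usedFields) and the filter to the BigQuery Storage Read API instead of evaluating them in a BeamCalcRel. A hedged sketch of a roughly equivalent hand-written BigQueryIO read, built only from the fields and filter printed above (the table reference is illustrative, not the test's actual table):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        p.apply("ReadWithPushDown",
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // illustrative table reference
                .withMethod(TypedRead.Method.DIRECT_READ)
                // Projection pushed down (usedFields in the BEAMPlan above):
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Filter pushed down (as logged by BigQueryTable.buildIOReader):
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }
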
    Mar 09, 2022 12:45:43 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 09, 2022 12:45:49 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 371 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 09, 2022 12:45:49 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 09, 2022 12:45:50 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2621542026693468388.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-P-64Y0gtt2VaJG-kOCwvL1JZkiHZMhVGL0vvVNfvd_Q.jar
    Mar 09, 2022 12:45:50 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 370 files cached, 1 files newly uploaded in 0 seconds
    Mar 09, 2022 12:45:50 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 09, 2022 12:45:50 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145321 bytes, hash 3b76690c191583700e1a74eab718639296e769bd015b2cd709d2c1770d184908> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-O3ZpDBkVg3AOGnTqtxhjkpbnab0BWyzXCdLBdw0YSQg.pb
    Mar 09, 2022 12:45:54 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 09, 2022 12:45:54 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 09, 2022 12:45:54 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 09, 2022 12:45:55 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 09, 2022 12:45:55 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-08_16_45_55-15308047562853932407?project=apache-beam-testing
    Mar 09, 2022 12:45:55 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-08_16_45_55-15308047562853932407
    Mar 09, 2022 12:45:55 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-08_16_45_55-15308047562853932407
    Mar 09, 2022 12:46:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-09T00:46:04.728Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 09, 2022 12:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T00:46:22.484Z: Worker configuration: e2-standard-2 in us-central1-f.
    Mar 09, 2022 12:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T00:46:23.389Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 09, 2022 12:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T00:46:23.432Z: Expanding GroupByKey operations into optimizable parts.
    Mar 09, 2022 12:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T00:46:23.460Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 09, 2022 12:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T00:46:23.555Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 09, 2022 12:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T00:46:23.591Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 09, 2022 12:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T00:46:23.624Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 09, 2022 12:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T00:46:24.024Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 09, 2022 12:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T00:46:24.141Z: Starting 5 workers in us-central1-f...
    Mar 09, 2022 12:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T00:46:27.472Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 09, 2022 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T00:47:01.038Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Mar 09, 2022 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T00:47:01.116Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Mar 09, 2022 12:47:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T00:47:11.503Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 09, 2022 12:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T00:47:36.869Z: Workers have started successfully.
    Mar 09, 2022 12:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T00:47:36.916Z: Workers have started successfully.
    Mar 09, 2022 12:48:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T00:48:11.379Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 09, 2022 12:48:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T00:48:11.564Z: Cleaning up.
    Mar 09, 2022 12:48:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T00:48:11.650Z: Stopping worker pool...
    Mar 09, 2022 12:50:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T00:50:35.280Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 09, 2022 12:50:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-09T00:50:35.333Z: Worker pool stopped.
    Mar 09, 2022 12:50:40 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-08_16_45_55-15308047562853932407 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d3844ba1-df36-4da9-97d9-c4b5aa9cf954 and timestamp: 2022-03-09T00:50:41.020000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.136

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 09, 2022 12:50:41 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.043 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 5 mins 19.91 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 19s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/fcvjkhdadlfra

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3114

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3114/display/redirect?page=changes>

Changes:

[Robert Bradshaw] Update dataflow API client.

[Robert Bradshaw] Instructions for updating apitools generated files.

[noreply] Merge pull request #17027: [BEAM-11205] Upgrade GCP Libraries BOM

[noreply] [BEAM-13709] Inconsistent behavior when parsing boolean flags across


------------------------------------------
[...truncated 348.35 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0291d09034a4c3a6e94433e5ef820c3c
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 08, 2022 6:57:48 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 08, 2022 6:57:48 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 08, 2022 6:57:49 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 370 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 08, 2022 6:57:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 08, 2022 6:57:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 08, 2022 6:57:53 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 08, 2022 6:57:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 08, 2022 6:57:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 08, 2022 6:57:53 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 08, 2022 6:57:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1857031358]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 08, 2022 6:57:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 08, 2022 6:57:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 08, 2022 6:57:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 08, 2022 6:57:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 08, 2022 6:57:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 08, 2022 6:57:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 08, 2022 6:57:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@604047477]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 08, 2022 6:57:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 08, 2022 6:57:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 08, 2022 6:57:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 08, 2022 6:57:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 08, 2022 6:57:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 08, 2022 6:57:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 08, 2022 6:57:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 08, 2022 6:57:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 08, 2022 6:57:55 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 08, 2022 6:57:59 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 371 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 08, 2022 6:57:59 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3642634983823389643.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-9ogK11fiiwwzTjCkaImrXxUhEplzWgfK0WUGogaupAQ.jar
    Mar 08, 2022 6:57:59 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 08, 2022 6:58:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 370 files cached, 1 files newly uploaded in 0 seconds
    Mar 08, 2022 6:58:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 08, 2022 6:58:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <145321 bytes, hash fe9985d841cd9df4ebecbe7cf88f808035cc1c4dddc5f32c25db20e77982cf68> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-_pmF2EHNnfTr7L58-I-AgDXMHE3dxfMsJdsg53mCz2g.pb
    Mar 08, 2022 6:58:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 08, 2022 6:58:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 08, 2022 6:58:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 08, 2022 6:58:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 08, 2022 6:58:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-08_10_58_04-16868219395086010898?project=apache-beam-testing
    Mar 08, 2022 6:58:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-08_10_58_04-16868219395086010898
    Mar 08, 2022 6:58:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-08_10_58_04-16868219395086010898
    Mar 08, 2022 6:58:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-08T18:58:05.621Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 08, 2022 6:58:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T18:58:15.690Z: Worker configuration: e2-standard-2 in us-central1-c.
    Mar 08, 2022 6:58:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T18:58:16.378Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 08, 2022 6:58:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T18:58:16.416Z: Expanding GroupByKey operations into optimizable parts.
    Mar 08, 2022 6:58:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T18:58:16.446Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 08, 2022 6:58:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T18:58:16.509Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 08, 2022 6:58:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T18:58:16.543Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 08, 2022 6:58:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T18:58:16.570Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 08, 2022 6:58:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T18:58:16.937Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 08, 2022 6:58:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T18:58:17.035Z: Starting 5 workers in us-central1-c...
    Mar 08, 2022 6:58:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T18:58:36.619Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 08, 2022 6:59:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T18:59:01.464Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 08, 2022 6:59:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T18:59:27.641Z: Workers have started successfully.
    Mar 08, 2022 6:59:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T18:59:27.676Z: Workers have started successfully.
    Mar 08, 2022 7:00:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T19:00:07.871Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 08, 2022 7:00:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T19:00:08.027Z: Cleaning up.
    Mar 08, 2022 7:00:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T19:00:08.098Z: Stopping worker pool...
    Mar 08, 2022 7:02:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T19:02:34.597Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 08, 2022 7:02:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T19:02:34.647Z: Worker pool stopped.
    Mar 08, 2022 7:02:40 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-08_10_58_04-16868219395086010898 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 512835da-709f-444f-9db3-334e54eb4f98 and timestamp: 2022-03-08T19:02:40.660000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.067

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 08, 2022 7:02:40 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 56.26 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 43s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/spzje7pqpitb4

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3113

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3113/display/redirect>

Changes:


------------------------------------------
[...truncated 350.63 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is cf54eb05da3c793120eb9065d2c8146f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 8'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 8'
Successfully started process 'Gradle Test Executor 8'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 08, 2022 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 08, 2022 12:45:12 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 08, 2022 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 08, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 08, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 08, 2022 12:45:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 08, 2022 12:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 08, 2022 12:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 08, 2022 12:45:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 08, 2022 12:45:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 08, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 08, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 08, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 08, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 08, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 08, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 08, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@646482993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 08, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 08, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 08, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 08, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 08, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 08, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 08, 2022 12:45:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 08, 2022 12:45:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
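    Unlike the two failing tests, the plan above folds the projection and the WHERE clause into the IO source itself, so BigQuery's Storage Read API returns only the four used fields with the filter already applied. In plain BigQueryIO terms the push-down is roughly equivalent to the sketch below (project/dataset/table names are placeholders, not the IT's configuration):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply(
            BigQueryIO.readTableRows()
                .from("my-project:my_dataset.HACKER_NEWS") // placeholder table id
                // DIRECT_READ = the Storage Read API, which supports push-down.
                .withMethod(TypedRead.Method.DIRECT_READ)
                // Only the columns the query uses are fetched...
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // ...and the filter is evaluated server-side as a row restriction.
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }

    The row-restriction string matches the "Pushing down the following filter" line verbatim.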
    Mar 08, 2022 12:45:19 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 08, 2022 12:45:24 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 08, 2022 12:45:24 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 08, 2022 12:45:24 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1819004653111321023.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-bFNPOi2GHPh94NpB-oKcgttZQ_2w5eLnaGC8OJ0pEhA.jar
    Mar 08, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 369 files cached, 1 files newly uploaded in 0 seconds
    Mar 08, 2022 12:45:25 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 08, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144984 bytes, hash f77c9fda0feba21b9b2ac6684d9ab23d1ba2c6b308a6b52fcee23e312867979c> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-93yf2g_rohubKsZoTZqyPRuixrMIprUvzuI-MShnl5w.pb
    Mar 08, 2022 12:45:28 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 08, 2022 12:45:28 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 08, 2022 12:45:28 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 08, 2022 12:45:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 08, 2022 12:45:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-08_04_45_28-1312014120172220489?project=apache-beam-testing
    Mar 08, 2022 12:45:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-08_04_45_28-1312014120172220489
    Mar 08, 2022 12:45:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-08_04_45_28-1312014120172220489
    Mar 08, 2022 12:45:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-08T12:45:29.812Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 08, 2022 12:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T12:45:39.894Z: Worker configuration: e2-standard-2 in us-central1-a.
    Mar 08, 2022 12:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T12:45:40.630Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 08, 2022 12:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T12:45:40.692Z: Expanding GroupByKey operations into optimizable parts.
    Mar 08, 2022 12:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T12:45:40.762Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 08, 2022 12:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T12:45:40.833Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 08, 2022 12:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T12:45:40.860Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 08, 2022 12:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T12:45:40.891Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 08, 2022 12:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T12:45:41.302Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 08, 2022 12:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T12:45:41.390Z: Starting 5 workers in us-central1-a...
    Mar 08, 2022 12:46:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T12:46:12.400Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 08, 2022 12:46:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T12:46:24.304Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 08, 2022 12:46:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T12:46:47.451Z: Workers have started successfully.
    Mar 08, 2022 12:46:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T12:46:47.496Z: Workers have started successfully.
    Mar 08, 2022 12:47:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T12:47:21.562Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 08, 2022 12:47:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T12:47:21.793Z: Cleaning up.
    Mar 08, 2022 12:47:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T12:47:21.868Z: Stopping worker pool...
    Mar 08, 2022 12:49:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T12:49:50.238Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 08, 2022 12:49:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T12:49:50.285Z: Worker pool stopped.
    Mar 08, 2022 12:49:57 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-08_04_45_28-1312014120172220489 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 9d65c896-9d56-468d-8264-794fdb22e487 and timestamp: 2022-03-08T12:49:57.158000000Z:
                     Metric:                    Value:
                   read_time                    10.801
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 08, 2022 12:49:57 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
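    A small note on the summary above: read_time and fields_read are presumably collected by the TimeMonitor and RowMonitor ParDos visible as pipeline steps s2/s3; the harness prints no units, but read_time is plausibly seconds and fields_read a raw field count. The InfluxDB warning only means no target database/measurement was configured for this run, so the numbers were not exported to InfluxDB; the BigQuery metrics destination passed via --metricsBigQueryDataset/--metricsBigQueryTable is a separate path not evidenced in this log.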

Gradle Test Executor 8 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.01 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.005 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 49.499 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 36s
165 actionable tasks: 105 executed, 58 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/aml2bfg5hhoju

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3112

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3112/display/redirect?page=changes>

Changes:

[jrmccluskey] [BEAM-14050] Update taxi.go example instructions

[noreply] [BEAM-13909] improve coverage of Provision package (#17014)


------------------------------------------
[...truncated 352.87 KB...]
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is cf54eb05da3c793120eb9065d2c8146f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 08, 2022 6:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 08, 2022 6:45:16 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 08, 2022 6:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 08, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 08, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 08, 2022 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 08, 2022 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 08, 2022 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 08, 2022 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 08, 2022 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 08, 2022 6:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 08, 2022 6:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 08, 2022 6:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 08, 2022 6:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 08, 2022 6:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 08, 2022 6:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 08, 2022 6:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1044945601]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 08, 2022 6:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 08, 2022 6:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 08, 2022 6:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 08, 2022 6:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 08, 2022 6:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 08, 2022 6:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 08, 2022 6:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 08, 2022 6:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 08, 2022 6:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 08, 2022 6:45:27 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 08, 2022 6:45:27 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 08, 2022 6:45:27 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test22409967936446598.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-nDdgpNlUGAaJqDralYDFWyApaNSm5V9X954bn_NcB2g.jar
    Mar 08, 2022 6:45:27 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/jakarta.xml.bind/jakarta.xml.bind-api/2.3.3/48e3b9cfc10752fba3521d6511f4165bea951801/jakarta.xml.bind-api-2.3.3.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jakarta.xml.bind-api-2.3.3-wEU59HLppt0MdoXqgtZ3KCJpq457rKLhRQDjgeDGzsU.jar
    Mar 08, 2022 6:45:28 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 368 files cached, 2 files newly uploaded in 0 seconds
    Mar 08, 2022 6:45:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 08, 2022 6:45:28 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144984 bytes, hash d6ed273fbbe429797a52c59546dde870d8ee87c58a3368d99bf6aa5e0bb258b6> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-1u0nP7vkKXl6UsWVRt3ocNjuh8WKM2jZm_aqXguyWLY.pb
    Mar 08, 2022 6:45:31 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 08, 2022 6:45:31 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 08, 2022 6:45:31 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 08, 2022 6:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 08, 2022 6:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-07_22_45_31-1864647736937242156?project=apache-beam-testing
    Mar 08, 2022 6:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-07_22_45_31-1864647736937242156
    Mar 08, 2022 6:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-07_22_45_31-1864647736937242156
    Mar 08, 2022 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-08T06:45:33.255Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 08, 2022 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T06:45:41.856Z: Worker configuration: e2-standard-2 in us-central1-f.
    Mar 08, 2022 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T06:45:42.649Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 08, 2022 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T06:45:42.688Z: Expanding GroupByKey operations into optimizable parts.
    Mar 08, 2022 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T06:45:42.717Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 08, 2022 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T06:45:42.785Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 08, 2022 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T06:45:42.812Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 08, 2022 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T06:45:42.833Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 08, 2022 6:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T06:45:43.163Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 08, 2022 6:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T06:45:43.241Z: Starting 5 workers in us-central1-f...
    Mar 08, 2022 6:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T06:45:58.751Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 08, 2022 6:46:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T06:46:36.882Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 08, 2022 6:47:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T06:47:01.704Z: Workers have started successfully.
    Mar 08, 2022 6:47:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T06:47:01.732Z: Workers have started successfully.
    Mar 08, 2022 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T06:47:37.406Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 08, 2022 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T06:47:37.567Z: Cleaning up.
    Mar 08, 2022 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T06:47:37.639Z: Stopping worker pool...
    Mar 08, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T06:50:09.398Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 08, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T06:50:09.488Z: Worker pool stopped.
    Mar 08, 2022 6:50:16 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-07_22_45_31-1864647736937242156 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 24f3cf09-bc2e-4b32-8ea7-2fe5227c4a66 and timestamp: 2022-03-08T06:50:16.173000000Z:
                     Metric:                    Value:
                   read_time                     9.506
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 08, 2022 6:50:16 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 5 mins 3.898 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 54s
165 actionable tasks: 106 executed, 57 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/3fmzogdgid2j6

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3111

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3111/display/redirect?page=changes>

Changes:

[dannymccormick] [BEAM-11085] Test that windows are correctly observed in DoFns

[noreply] Give pr bot write permissions on pr update

[noreply] Adding a logical type for Schemas using proto serialization. (#16940)

[noreply] BEAM-13765 missing PAssert methods (#16668)


------------------------------------------
[...truncated 360.70 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is cf54eb05da3c793120eb9065d2c8146f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 4'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 4'
Successfully started process 'Gradle Test Executor 4'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 08, 2022 1:08:25 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 08, 2022 1:08:26 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 08, 2022 1:08:27 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 08, 2022 1:08:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 08, 2022 1:08:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 08, 2022 1:08:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 08, 2022 1:08:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 08, 2022 1:08:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 08, 2022 1:08:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 08, 2022 1:08:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 08, 2022 1:08:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 08, 2022 1:08:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 08, 2022 1:08:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 08, 2022 1:08:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 08, 2022 1:08:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 08, 2022 1:08:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 08, 2022 1:08:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@646482993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 08, 2022 1:08:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 08, 2022 1:08:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 08, 2022 1:08:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 08, 2022 1:08:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 08, 2022 1:08:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 08, 2022 1:08:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 08, 2022 1:08:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 08, 2022 1:08:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 08, 2022 1:08:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 08, 2022 1:08:37 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 08, 2022 1:08:37 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test726033699541675377.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-FfyFo0U-ieCXRevp0TJYoGGN8KOuGM788ffsmLsU0EE.jar
    Mar 08, 2022 1:08:37 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 08, 2022 1:08:39 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 369 files cached, 1 files newly uploaded in 1 seconds
    Mar 08, 2022 1:08:39 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 08, 2022 1:08:39 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144984 bytes, hash 2857cdf4fcadcc34bb77dd9f51b6602f55ec9a91614ad94cc0fc546cb451ebb3> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-KFfN9PytzDS7d92fUbZgL1XsmpFhStlMwPxUbLRR67M.pb
    Mar 08, 2022 1:08:42 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 08, 2022 1:08:42 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 08, 2022 1:08:42 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 08, 2022 1:08:42 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 08, 2022 1:08:43 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-07_17_08_42-11825246350605966308?project=apache-beam-testing
    Mar 08, 2022 1:08:43 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-07_17_08_42-11825246350605966308
    Mar 08, 2022 1:08:43 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-07_17_08_42-11825246350605966308
    Mar 08, 2022 1:08:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-08T01:08:44.457Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 08, 2022 1:08:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T01:08:56.096Z: Worker configuration: e2-standard-2 in us-central1-c.
    Mar 08, 2022 1:08:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T01:08:57.069Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 08, 2022 1:08:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T01:08:57.100Z: Expanding GroupByKey operations into optimizable parts.
    Mar 08, 2022 1:08:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T01:08:57.139Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 08, 2022 1:08:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T01:08:57.220Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 08, 2022 1:08:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T01:08:57.256Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 08, 2022 1:08:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T01:08:57.291Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 08, 2022 1:08:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T01:08:57.662Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 08, 2022 1:08:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T01:08:57.745Z: Starting 5 workers in us-central1-c...
    Mar 08, 2022 1:09:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T01:09:19.049Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 08, 2022 1:09:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T01:09:34.631Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 08, 2022 1:10:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T01:10:02.278Z: Workers have started successfully.
    Mar 08, 2022 1:10:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T01:10:02.318Z: Workers have started successfully.
    Mar 08, 2022 1:10:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T01:10:40.586Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 08, 2022 1:10:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T01:10:40.743Z: Cleaning up.
    Mar 08, 2022 1:10:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T01:10:40.829Z: Stopping worker pool...
    Mar 08, 2022 1:13:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T01:13:06.324Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 08, 2022 1:13:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-08T01:13:06.395Z: Worker pool stopped.
    Mar 08, 2022 1:13:12 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-07_17_08_42-11825246350605966308 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): be9f2720-bb65-4725-832a-5aa8f1a10435 and timestamp: 2022-03-08T01:13:12.997000000Z:
                     Metric:                    Value:
                   read_time                     8.573
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 08, 2022 1:13:13 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
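
The two properties the publisher checks for are the InfluxDB measurement and database names, which this Jenkins invocation does not set, so the run's metrics are computed but not exported. A hedged sketch of supplying them through the Beam test utilities; the builder methods on InfluxDBSettings and every value below are assumptions for illustration, not this job's configuration:

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class MetricsPublishingConfig {
      public static InfluxDBSettings create() {
        // Placeholder values; a real job would read these from pipeline options.
        return InfluxDBSettings.builder()
            .withHost("http://localhost:8086")  // assumed local InfluxDB endpoint
            .withDatabase("beam_test_metrics")  // the "database" property the warning names
            .withMeasurement("sql_bqio_read")   // the "measurement" property the warning names
            .get();
      }
    }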

Gradle Test Executor 4 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 50.644 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 43s
165 actionable tasks: 109 executed, 54 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/esofpzfl3ozda

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3110

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3110/display/redirect>

Changes:


------------------------------------------
[...truncated 349.56 KB...]
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 07, 2022 6:53:43 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
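
The deprecation concerns only the option's name; this job still passes the legacy flag (--workerHarnessContainerImage=, left empty). A hedged sketch of the preferred spelling, assuming a setSdkContainerImage setter on the Dataflow options (the empty value mirrors this job's invocation and means the runner's default image):

    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class ContainerImageOption {
      public static DataflowPipelineOptions create() {
        DataflowPipelineOptions options =
            PipelineOptionsFactory.create().as(DataflowPipelineOptions.class);
        // Preferred replacement for --workerHarnessContainerImage; empty keeps
        // the runner-selected default image, as in this job.
        options.setSdkContainerImage("");
        return options;
      }
    }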
    Mar 07, 2022 6:53:44 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 07, 2022 6:53:44 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 07, 2022 6:53:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 07, 2022 6:53:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 07, 2022 6:53:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
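
For orientation, the logged SQL is what the integration test hands to Beam SQL. A minimal runnable sketch of issuing the same WHERE clause through SqlTransform; the in-memory rows and four-field schema are illustrative stand-ins for the real HACKER_NEWS table, whose BigQuery registration lives in the test harness and is omitted here:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class QuerySketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        // Illustrative stand-in covering only the four columns the query uses.
        Schema schema =
            Schema.builder()
                .addStringField("by")
                .addStringField("type")
                .addStringField("title")
                .addInt32Field("score")
                .build();
        PCollection<Row> rows =
            p.apply(
                Create.of(
                        Row.withSchema(schema).addValues("alice", "story", "Hello", 3).build(),
                        Row.withSchema(schema).addValues("bob", "comment", "Re: Hello", 9).build())
                    .withRowSchema(schema));
        // A single input PCollection is addressed as PCOLLECTION in Beam SQL;
        // the predicate matches the one the planner logs above.
        PCollection<Row> filtered =
            rows.apply(
                SqlTransform.query(
                    "SELECT `by` AS `author`, `type`, `title`, `score` FROM PCOLLECTION "
                        + "WHERE (`type` = 'story' OR `type` = 'job') AND `score` > 2"));
        p.run().waitUntilFinish();
      }
    }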
    Mar 07, 2022 6:53:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 07, 2022 6:53:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 07, 2022 6:53:49 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 07, 2022 6:53:49 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
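
The first root cause listed above is the operative one for Row outputs: a PCollection<Row> cannot be coded by inference alone and needs its schema attached. A minimal sketch of the setRowSchema fix the message itself suggests; the field names are illustrative, not the actual HACKER_NEWS schema:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFix {
      // Attaching a schema gives the PCollection a row coder, which is what the
      // "Unable to return a default Coder" check above failed to find.
      public static PCollection<Row> withSchema(PCollection<Row> rows) {
        Schema schema =
            Schema.builder()
                .addStringField("author")  // illustrative fields only
                .addStringField("type")
                .addStringField("title")
                .addInt32Field("score")
                .build();
        return rows.setRowSchema(schema);
      }
    }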

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 07, 2022 6:53:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 07, 2022 6:53:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 07, 2022 6:53:50 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 07, 2022 6:53:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 07, 2022 6:53:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 07, 2022 6:53:50 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 07, 2022 6:53:50 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@646482993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 07, 2022 6:53:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 07, 2022 6:53:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 07, 2022 6:53:50 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 07, 2022 6:53:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 07, 2022 6:53:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 07, 2022 6:53:50 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 07, 2022 6:53:50 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 07, 2022 6:53:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
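
This push-down is what the passing test measures: only the four used fields, already filtered, cross the BigQuery Storage Read API. A hedged sketch of an equivalent stand-alone BigQueryIO read; the table reference is a placeholder, while the selected fields and row restriction mirror the plan logged above:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;

    public class DirectReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        p.apply(
            BigQueryIO.readTableRows()
                .from("some-project:some_dataset.hacker_news") // placeholder table
                .withMethod(Method.DIRECT_READ)
                // the projection the planner reports as usedFields
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // the predicate the planner reports as pushed down
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }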
    Mar 07, 2022 6:53:51 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 07, 2022 6:53:55 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 07, 2022 6:53:56 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 07, 2022 6:53:56 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3067716549312837561.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-8r7NVNxp092_Bu1mhR9bBsAgjSLsO_ixsCk0VihFthA.jar
    Mar 07, 2022 6:54:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 369 files cached, 1 files newly uploaded in 5 seconds
    Mar 07, 2022 6:54:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 07, 2022 6:54:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144984 bytes, hash aadcf829c0297410d1b4e3ce63b2af24b899fb9a3ad479d3940b43a6a6e6e395> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-qtz4KcApdBDRtOPOY7KvJLiZ-5o61HnTlAtDpqbm45U.pb
    Mar 07, 2022 6:54:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 07, 2022 6:54:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 07, 2022 6:54:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 07, 2022 6:54:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 07, 2022 6:54:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-07_10_54_05-14126105918211669964?project=apache-beam-testing
    Mar 07, 2022 6:54:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-07_10_54_05-14126105918211669964
    Mar 07, 2022 6:54:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-07_10_54_05-14126105918211669964
    Mar 07, 2022 6:54:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-07T18:54:06.253Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 07, 2022 6:54:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T18:54:15.700Z: Worker configuration: e2-standard-2 in us-central1-b.
    Mar 07, 2022 6:54:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T18:54:16.734Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 07, 2022 6:54:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T18:54:16.786Z: Expanding GroupByKey operations into optimizable parts.
    Mar 07, 2022 6:54:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T18:54:16.815Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 07, 2022 6:54:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T18:54:16.888Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 07, 2022 6:54:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T18:54:16.926Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 07, 2022 6:54:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T18:54:16.953Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 07, 2022 6:54:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T18:54:17.455Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 07, 2022 6:54:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T18:54:17.545Z: Starting 5 workers in us-central1-b...
    Mar 07, 2022 6:54:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T18:54:20.732Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 07, 2022 6:54:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T18:54:58.376Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Mar 07, 2022 6:54:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T18:54:58.413Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Mar 07, 2022 6:55:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T18:55:08.803Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 07, 2022 6:55:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T18:55:32.228Z: Workers have started successfully.
    Mar 07, 2022 6:55:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T18:55:32.270Z: Workers have started successfully.
    Mar 07, 2022 6:56:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T18:56:07.444Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 07, 2022 6:56:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T18:56:07.696Z: Cleaning up.
    Mar 07, 2022 6:56:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T18:56:07.811Z: Stopping worker pool...
    Mar 07, 2022 6:58:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T18:58:36.216Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 07, 2022 6:58:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T18:58:36.320Z: Worker pool stopped.
    Mar 07, 2022 6:58:42 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-07_10_54_05-14126105918211669964 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): b8ecf6b0-e375-4bb8-ac43-dbb12462e1e9 and timestamp: 2022-03-07T18:58:42.851000000Z:
                     Metric:                    Value:
                   read_time                      9.39
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 07, 2022 6:58:42 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 5 mins 3.366 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 53s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/swwkiqaejwnt4

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3109

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3109/display/redirect>

Changes:


------------------------------------------
[...truncated 346.63 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 07, 2022 12:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 07, 2022 12:44:51 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 07, 2022 12:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 07, 2022 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 07, 2022 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 07, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 07, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 07, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 07, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 07, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 07, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 07, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 07, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 07, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 07, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 07, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 07, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@646482993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 07, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 07, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 07, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 07, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 07, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 07, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 07, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 07, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 07, 2022 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 07, 2022 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 07, 2022 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 07, 2022 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1080509061062967122.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-vSEZIFK3H_L5n9L1fQ2ZzZK-cS33tMsoZi3Oueoijcw.jar
    Mar 07, 2022 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 369 files cached, 1 files newly uploaded in 1 seconds
    Mar 07, 2022 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 07, 2022 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144984 bytes, hash 2fe2e49f2608172ca8333b0816e95b4457f7489a156531bccf84aa70a24b010d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-L-LknyYIFyyoMzsIFulbRFf3SJoVZTG8z4SqcKJLAQ0.pb
    Mar 07, 2022 12:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 07, 2022 12:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 07, 2022 12:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 07, 2022 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 07, 2022 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-07_04_45_07-15423191427702930420?project=apache-beam-testing
    Mar 07, 2022 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-07_04_45_07-15423191427702930420
    Mar 07, 2022 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-07_04_45_07-15423191427702930420
    Mar 07, 2022 12:45:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-07T12:45:09.108Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 07, 2022 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T12:45:21.627Z: Worker configuration: e2-standard-2 in us-central1-c.
    Mar 07, 2022 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T12:45:22.419Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 07, 2022 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T12:45:22.462Z: Expanding GroupByKey operations into optimizable parts.
    Mar 07, 2022 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T12:45:22.503Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 07, 2022 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T12:45:22.580Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 07, 2022 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T12:45:22.600Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 07, 2022 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T12:45:22.621Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 07, 2022 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T12:45:23.107Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 07, 2022 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T12:45:23.215Z: Starting 5 workers in us-central1-c...
    Mar 07, 2022 12:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T12:45:43.831Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 07, 2022 12:46:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T12:46:08.779Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Mar 07, 2022 12:46:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T12:46:08.818Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Mar 07, 2022 12:46:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T12:46:19.035Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 07, 2022 12:46:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T12:46:40.204Z: Workers have started successfully.
    Mar 07, 2022 12:46:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T12:46:40.236Z: Workers have started successfully.
    Mar 07, 2022 12:47:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T12:47:18.405Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 07, 2022 12:47:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T12:47:18.723Z: Cleaning up.
    Mar 07, 2022 12:47:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T12:47:18.824Z: Stopping worker pool...
    Mar 07, 2022 12:49:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T12:49:57.039Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 07, 2022 12:49:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T12:49:57.093Z: Worker pool stopped.
    Mar 07, 2022 12:50:04 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-07_04_45_07-15423191427702930420 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 28c1e522-d708-4a07-8a4d-21d0026f16a3 and timestamp: 2022-03-07T12:50:04.327000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    13.485

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 07, 2022 12:50:04 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 5 mins 17.361 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 44s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/k6dnd33i7sevc

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3108

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3108/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13925] Add ability to get metrics on pr-bot performance (#16985)


------------------------------------------
[...truncated 346.97 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 07, 2022 6:45:26 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 07, 2022 6:45:27 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 07, 2022 6:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 07, 2022 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 07, 2022 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 07, 2022 6:45:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 07, 2022 6:45:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 07, 2022 6:45:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 07, 2022 6:45:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 07, 2022 6:45:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
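
For context: the SQL above is compiled by Calcite into the LogicalProject/LogicalFilter plan and then lowered to the BeamCalcRel just printed, which evaluates the filter inside the pipeline. A minimal, self-contained sketch of issuing the same query through Beam's SqlTransform; the four-column schema, the sample row, and the class name are illustrative stand-ins for the real 14-column HACKER_NEWS table (the plan's expr#0..13), not code from this test:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class HackerNewsQuerySketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // Illustrative schema; the job's HACKER_NEWS table has 14 columns.
        Schema schema = Schema.builder()
            .addStringField("by")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

        // Create needs the row schema up front so it can pick a RowCoder.
        PCollection<Row> hackerNews = p.apply(
            Create.of(Row.withSchema(schema)
                    .addValues("a", "story", "t", 3L).build())
                .withRowSchema(schema));

        // Same projection and filter as the job's query; PCOLLECTION is the
        // implicit table name for a single-input SqlTransform.
        PCollection<Row> result = hackerNews.apply(
            SqlTransform.query(
                "SELECT `by` AS author, `type`, title, score "
                    + "FROM PCOLLECTION "
                    + "WHERE (`type` = 'story' OR `type` = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }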


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
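
For context: this IllegalStateException is Beam's standard coder-resolution failure for a PCollection of Row, and the message itself names the remedy: attach a schema with PCollection.setRowSchema so a RowCoder can be installed. A minimal sketch of that pattern; the two-field schema and the CSV-parsing transform are illustrative, not the test's code:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.MapElements;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TypeDescriptor;

    public class RowSchemaFixSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        Schema schema = Schema.builder()
            .addStringField("author")
            .addInt64Field("score")
            .build();

        PCollection<Row> rows =
            p.apply(Create.of("alice,3", "bob,7"))
                .apply(MapElements.into(TypeDescriptor.of(Row.class))
                    .via(line -> {
                      String[] parts = line.split(",");
                      return Row.withSchema(schema)
                          .addValues(parts[0], Long.parseLong(parts[1]))
                          .build();
                    }))
                // Without this call, coder inference fails exactly as in the
                // log: "Cannot provide a coder for a Beam Row."
                .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }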

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 07, 2022 6:45:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 07, 2022 6:45:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 07, 2022 6:45:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 07, 2022 6:45:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 07, 2022 6:45:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 07, 2022 6:45:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 07, 2022 6:45:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1044945601]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 07, 2022 6:45:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 07, 2022 6:45:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 07, 2022 6:45:33 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 07, 2022 6:45:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 07, 2022 6:45:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 07, 2022 6:45:33 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 07, 2022 6:45:33 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 07, 2022 6:45:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
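
For context: with DIRECT_READ, the projected fields and the filter logged above are handed to the BigQuery Storage Read API rather than evaluated inside the pipeline, which is why this variant of the test succeeds without a BeamCalcRel stage. A standalone read of roughly the same shape, sketched with BigQueryIO's public withSelectedFields/withRowRestriction API rather than the test's internal BigQueryTable; the table name is illustrative:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        p.apply(BigQueryIO.readTableRows()
            .from("bigquery-public-data:hacker_news.full") // illustrative table
            .withMethod(TypedRead.Method.DIRECT_READ)
            // Project only the used fields and let the Storage Read API
            // evaluate the filter server-side, as in the log above.
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction(
                "(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }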
    Mar 07, 2022 6:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 07, 2022 6:45:37 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 07, 2022 6:45:37 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 07, 2022 6:45:37 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1189710525153762920.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ZAiYT3C4TXQMjBIE2x8HpzzBjAMD3sEEZzZgySMjZt4.jar
    Mar 07, 2022 6:45:38 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 369 files cached, 1 files newly uploaded in 0 seconds
    Mar 07, 2022 6:45:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 07, 2022 6:45:38 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144984 bytes, hash c251d63c097d6207fb6cc092ba2a16cb0a42127ec64e3bb7fa3e2ca1b39c57ba> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-wlHWPAl9Ygf7bMCSuioWywpCEn7GTju3-j4sobOcV7o.pb
    Mar 07, 2022 6:45:41 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 07, 2022 6:45:41 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 07, 2022 6:45:41 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 07, 2022 6:45:41 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 07, 2022 6:45:42 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-06_22_45_42-11040233825891357671?project=apache-beam-testing
    Mar 07, 2022 6:45:42 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-06_22_45_42-11040233825891357671
    Mar 07, 2022 6:45:42 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-06_22_45_42-11040233825891357671
    Mar 07, 2022 6:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-07T06:45:43.076Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 07, 2022 6:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T06:45:52.335Z: Worker configuration: e2-standard-2 in us-central1-a.
    Mar 07, 2022 6:45:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T06:45:53.097Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 07, 2022 6:45:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T06:45:53.137Z: Expanding GroupByKey operations into optimizable parts.
    Mar 07, 2022 6:45:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T06:45:53.170Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 07, 2022 6:45:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T06:45:53.240Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 07, 2022 6:45:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T06:45:53.266Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 07, 2022 6:45:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T06:45:53.300Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 07, 2022 6:45:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T06:45:53.646Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 07, 2022 6:45:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T06:45:53.718Z: Starting 5 workers in us-central1-a...
    Mar 07, 2022 6:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T06:45:57.351Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 07, 2022 6:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T06:46:26.380Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Mar 07, 2022 6:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T06:46:26.411Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Mar 07, 2022 6:46:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T06:46:36.724Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 07, 2022 6:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T06:47:00.783Z: Workers have started successfully.
    Mar 07, 2022 6:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T06:47:00.809Z: Workers have started successfully.
    Mar 07, 2022 6:47:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T06:47:29.339Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 07, 2022 6:47:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T06:47:29.481Z: Cleaning up.
    Mar 07, 2022 6:47:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T06:47:29.554Z: Stopping worker pool...
    Mar 07, 2022 6:49:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T06:49:52.630Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 07, 2022 6:49:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T06:49:52.673Z: Worker pool stopped.
    Mar 07, 2022 6:49:59 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-06_22_45_42-11040233825891357671 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 45f0cdb8-4e10-4847-8753-d45512863cb8 and timestamp: 2022-03-07T06:49:59.079000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.276

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 07, 2022 6:49:59 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 36.398 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 9s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/h5v3y7xf275c4

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3107

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3107/display/redirect>

Changes:


------------------------------------------
[...truncated 347.83 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 07, 2022 12:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 07, 2022 12:44:56 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 07, 2022 12:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 07, 2022 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 07, 2022 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 07, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 07, 2022 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 07, 2022 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 07, 2022 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 07, 2022 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 07, 2022 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 07, 2022 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 07, 2022 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 07, 2022 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 07, 2022 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 07, 2022 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 07, 2022 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@646482993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 07, 2022 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 07, 2022 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 07, 2022 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 07, 2022 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 07, 2022 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 07, 2022 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 07, 2022 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 07, 2022 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 07, 2022 12:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 07, 2022 12:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 07, 2022 12:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 07, 2022 12:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2624833369018817343.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ijWYcyZAmTqkbrB6xWWDt6C5VGUWjkwHZY84YZsFOmI.jar
    Mar 07, 2022 12:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 369 files cached, 1 files newly uploaded in 2 seconds
    Mar 07, 2022 12:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 07, 2022 12:45:11 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144984 bytes, hash 75604a5060891b8cb8c05b79c1192d5db0df724b95199d4dd996a0455ff7bd07> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-dWBKUGCJG4y4wFt5wRktXbDfckuVGZ1N2ZagRV_3vQc.pb
    Mar 07, 2022 12:45:14 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 07, 2022 12:45:14 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 07, 2022 12:45:14 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 07, 2022 12:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 07, 2022 12:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-06_16_45_14-4959872611628617707?project=apache-beam-testing
    Mar 07, 2022 12:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-06_16_45_14-4959872611628617707
    Mar 07, 2022 12:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-06_16_45_14-4959872611628617707
    Mar 07, 2022 12:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-07T00:45:19.802Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 07, 2022 12:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T00:45:32.214Z: Worker configuration: e2-standard-2 in us-central1-a.
    Mar 07, 2022 12:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T00:45:33.143Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 07, 2022 12:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T00:45:33.182Z: Expanding GroupByKey operations into optimizable parts.
    Mar 07, 2022 12:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T00:45:33.208Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 07, 2022 12:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T00:45:33.278Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 07, 2022 12:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T00:45:33.314Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 07, 2022 12:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T00:45:33.346Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 07, 2022 12:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T00:45:33.759Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 07, 2022 12:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T00:45:33.836Z: Starting 5 workers in us-central1-a...
    Mar 07, 2022 12:46:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T00:46:01.256Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 07, 2022 12:46:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T00:46:07.829Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Mar 07, 2022 12:46:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T00:46:07.859Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Mar 07, 2022 12:46:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T00:46:18.298Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 07, 2022 12:46:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T00:46:41.679Z: Workers have started successfully.
    Mar 07, 2022 12:46:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T00:46:41.706Z: Workers have started successfully.
    Mar 07, 2022 12:47:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T00:47:24.282Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 07, 2022 12:47:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T00:47:24.413Z: Cleaning up.
    Mar 07, 2022 12:47:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T00:47:24.485Z: Stopping worker pool...
    Mar 07, 2022 12:49:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T00:49:45.511Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 07, 2022 12:49:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-07T00:49:45.794Z: Worker pool stopped.
    Mar 07, 2022 12:49:52 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-06_16_45_14-4959872611628617707 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 90e88653-542b-4f9b-8783-8948c853cb06 and timestamp: 2022-03-07T00:49:52.828000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.486

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 07, 2022 12:49:52 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 5 mins 1.301 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 32s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/es6jwjgyrsxtg

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3106

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3106/display/redirect>

Changes:


------------------------------------------
[...truncated 346.54 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c3aaf440e76b0dc724b1fa4e617de156
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 06, 2022 6:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 06, 2022 6:44:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 06, 2022 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 06, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 06, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 06, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 06, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 06, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 06, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 06, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 06, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 06, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 06, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 06, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 06, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 06, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 06, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1044945601]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 06, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 06, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 06, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 06, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 06, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 06, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 06, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 06, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
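
This is the push-down working as intended: with method DIRECT_READ, the planner collapses the BeamCalcRel/BeamIOSourceRel pair into a single BeamPushDownIOSourceRel, handing both the projection (usedFields) and the supported filter to the BigQuery Storage Read API, so only the needed columns and matching rows leave BigQuery. A standalone-BigQueryIO sketch of roughly the same read; the table name and the 'pipeline' variable are assumptions, and this is not the SQL extension's internal code:

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.values.PCollection;

    // 'pipeline' is an existing Pipeline instance.
    PCollection<TableRow> stories = pipeline.apply(
        BigQueryIO.readTableRows()
            .from("bigquery-public-data:hacker_news.full")  // assumed table
            .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));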
    Mar 06, 2022 6:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 06, 2022 6:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 06, 2022 6:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 06, 2022 6:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7852502475321317571.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-sPCcnMJ77ZY3z-UQkAiBSvb5VMUjG6ihpAVn2gjFt6g.jar
    Mar 06, 2022 6:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 369 files cached, 1 files newly uploaded in 0 seconds
    Mar 06, 2022 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 06, 2022 6:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144984 bytes, hash 7f8f3405db5bb591c73b047ebc1db23e753511d7af6882d5ec34b05a5cff42fb> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-f480BdtbtZHHOwR-vB2yPnU1EdevaILV7DSwWlz_Qvs.pb
    Mar 06, 2022 6:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 06, 2022 6:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 06, 2022 6:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 06, 2022 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 06, 2022 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-06_10_45_12-232131737669653699?project=apache-beam-testing
    Mar 06, 2022 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-06_10_45_12-232131737669653699
    Mar 06, 2022 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-06_10_45_12-232131737669653699
    Mar 06, 2022 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-06T18:45:14.679Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
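
This warning is expected for this job: with --autoscalingAlgorithm=NONE the worker pool is pinned at --numWorkers (5) for the whole run, which is what a performance test wants, and --maxNumWorkers is simply ignored. The flag only becomes meaningful with autoscaling enabled, e.g. --autoscalingAlgorithm=THROUGHPUT_BASED --maxNumWorkers=5.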
    Mar 06, 2022 6:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T18:45:30.093Z: Worker configuration: e2-standard-2 in us-central1-f.
    Mar 06, 2022 6:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T18:45:31.096Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 06, 2022 6:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T18:45:31.137Z: Expanding GroupByKey operations into optimizable parts.
    Mar 06, 2022 6:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T18:45:31.165Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 06, 2022 6:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T18:45:31.236Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 06, 2022 6:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T18:45:31.258Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 06, 2022 6:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T18:45:31.290Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 06, 2022 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T18:45:31.779Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 06, 2022 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T18:45:31.842Z: Starting 5 workers in us-central1-f...
    Mar 06, 2022 6:45:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T18:45:50.228Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 06, 2022 6:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T18:46:24.189Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 06, 2022 6:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T18:46:53.014Z: Workers have started successfully.
    Mar 06, 2022 6:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T18:46:53.048Z: Workers have started successfully.
    Mar 06, 2022 6:47:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T18:47:26.958Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 06, 2022 6:47:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T18:47:27.088Z: Cleaning up.
    Mar 06, 2022 6:47:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T18:47:27.168Z: Stopping worker pool...
    Mar 06, 2022 6:49:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T18:49:52.804Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 06, 2022 6:49:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T18:49:52.848Z: Worker pool stopped.
    Mar 06, 2022 6:50:01 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-06_10_45_12-232131737669653699 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4c976c71-f98e-4e6b-910e-39b1ed46f967 and timestamp: 2022-03-06T18:50:01.736000000Z:
                     Metric:                    Value:
                   read_time                     9.619
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 06, 2022 6:50:01 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
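
Because of this missing configuration, the read_time/fields_read results above are presumably recorded via the --metricsBigQueryDataset/--metricsBigQueryTable options but are not exported to InfluxDB. The likely fix is to supply the settings the publisher expects, in Beam's test utilities normally passed as pipeline options along the lines of --influxDatabase=beam_test_metrics --influxMeasurement=sql_bqio_read_java_batch; both option names and values here are assumptions to verify against the job definition, not taken from this log.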

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 5 mins 12.197 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings
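
For this job that would be, for example (task path taken from the failing task above; the wrapper invocation is an assumption):

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --warning-mode all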

BUILD FAILED in 5m 42s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/u4lry2qlc3l6m

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3105

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3105/display/redirect>

Changes:


------------------------------------------
[...truncated 371.15 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c3aaf440e76b0dc724b1fa4e617de156
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]
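
The multiple-bindings warning is benign for this run: SLF4J binds to just one of them (effectively arbitrarily; here the JDK14 binding, as the last line reports) and ignores the rest. If a clean classpath were wanted, the usual first step is to see which dependencies drag in the extra bindings, e.g. (task path is an assumption):

    ./gradlew :sdks:java:extensions:sql:perf-tests:dependencies --configuration testRuntimeClasspath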

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 06, 2022 2:34:23 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
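
The deprecation is straightforward to address in the job configuration: in the -DbeamTestPipelineOptions list above, "--workerHarnessContainerImage=" would become "--sdkContainerImage=" (same empty value), which is the replacement option the warning names.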
    Mar 06, 2022 2:34:24 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 06, 2022 2:34:24 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 06, 2022 2:34:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 06, 2022 2:34:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 06, 2022 2:34:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 06, 2022 2:34:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 06, 2022 2:34:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 06, 2022 2:34:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 06, 2022 2:34:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
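
For orientation, the frames above show the test's path into the planner: the SQL string is parsed into a BeamRelNode, which toPCollection then expands into PCollections (the backquoted `by` column is aliased to author because BY is a reserved SQL keyword). A heavily condensed sketch of that flow; BeamSqlEnv.inMemory with a BigQueryTableProvider, the abbreviated query text, and the 'pipeline' variable are assumptions about the test's wiring, not copied from its source:

    import org.apache.beam.sdk.extensions.sql.impl.BeamSqlEnv;
    import org.apache.beam.sdk.extensions.sql.impl.rel.BeamRelNode;
    import org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils;
    import org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTableProvider;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    BeamSqlEnv sqlEnv = BeamSqlEnv.inMemory(new BigQueryTableProvider());
    BeamRelNode relNode = sqlEnv.parseQuery(
        "SELECT `by` AS author, type, title, score FROM HACKER_NEWS "
            + "WHERE (type = 'story' OR type = 'job') AND score > 2");
    // This is the call at BeamSqlRelUtils.java:48/106 in the trace above.
    PCollection<Row> rows = BeamSqlRelUtils.toPCollection(pipeline, relNode);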

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 06, 2022 2:34:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 06, 2022 2:34:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 06, 2022 2:34:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 06, 2022 2:34:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 06, 2022 2:34:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 06, 2022 2:34:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 06, 2022 2:34:30 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1044945601]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 06, 2022 2:34:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 06, 2022 2:34:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 06, 2022 2:34:30 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 06, 2022 2:34:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 06, 2022 2:34:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 06, 2022 2:34:30 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 06, 2022 2:34:30 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 06, 2022 2:34:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 06, 2022 2:34:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 06, 2022 2:34:35 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 06, 2022 2:34:35 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 06, 2022 2:34:35 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1808298691443508938.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-CcMzEBqJhrs2cmkjE4-lh4t7a_dLpF5yAM_VaGzDkkM.jar
    Mar 06, 2022 2:34:38 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 369 files cached, 1 files newly uploaded in 2 seconds
    Mar 06, 2022 2:34:38 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 06, 2022 2:34:38 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144984 bytes, hash e6dd1c1396188a8f09a1aafe0b2a77c4729cbced4f6827702201a4600cf23edb> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-5t0cE5YYio8Joar-Cyp3xHKcvO1PaCdwIgGkYAzyPts.pb
    Mar 06, 2022 2:34:41 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 06, 2022 2:34:41 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 06, 2022 2:34:41 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 06, 2022 2:34:41 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 06, 2022 2:34:42 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-06_06_34_41-17409619061245119164?project=apache-beam-testing
    Mar 06, 2022 2:34:42 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-06_06_34_41-17409619061245119164
    Mar 06, 2022 2:34:42 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-06_06_34_41-17409619061245119164
    Mar 06, 2022 2:34:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-06T14:34:43.017Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 06, 2022 2:34:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T14:34:53.760Z: Worker configuration: e2-standard-2 in us-central1-a.
    Mar 06, 2022 2:34:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T14:34:54.723Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 06, 2022 2:34:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T14:34:54.823Z: Expanding GroupByKey operations into optimizable parts.
    Mar 06, 2022 2:34:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T14:34:54.866Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 06, 2022 2:34:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T14:34:54.946Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 06, 2022 2:34:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T14:34:54.993Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 06, 2022 2:34:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T14:34:55.028Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 06, 2022 2:34:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T14:34:55.357Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 06, 2022 2:34:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T14:34:55.429Z: Starting 5 workers in us-central1-a...
    Mar 06, 2022 2:35:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T14:35:19.850Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 06, 2022 2:35:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T14:35:39.738Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 06, 2022 2:36:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T14:36:02.877Z: Workers have started successfully.
    Mar 06, 2022 2:36:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T14:36:02.915Z: Workers have started successfully.
    Mar 06, 2022 2:36:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T14:36:34.243Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 06, 2022 2:36:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T14:36:34.490Z: Cleaning up.
    Mar 06, 2022 2:36:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T14:36:34.573Z: Stopping worker pool...
    Mar 06, 2022 2:39:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T14:38:59.862Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 06, 2022 2:39:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T14:38:59.912Z: Worker pool stopped.
    Mar 06, 2022 2:39:05 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-06_06_34_41-17409619061245119164 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 76edda5d-9430-47bf-a681-eab1fba233cd and timestamp: 2022-03-06T14:39:05.220000000Z:
                     Metric:                    Value:
                   read_time                      8.99
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 06, 2022 2:39:05 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 45.788 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 14s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/nzsr27hb7l7s2

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3104

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3104/display/redirect>

Changes:


------------------------------------------
[...truncated 346.72 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c3aaf440e76b0dc724b1fa4e617de156
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 06, 2022 6:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 06, 2022 6:44:52 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 06, 2022 6:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 06, 2022 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 06, 2022 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 06, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 06, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 06, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 06, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 06, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 06, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 06, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 06, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 06, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 06, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 06, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 06, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1044945601]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 06, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 06, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 06, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 06, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 06, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 06, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 06, 2022 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 06, 2022 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 06, 2022 6:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 06, 2022 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 06, 2022 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 06, 2022 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test895757654731591023.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-wsPf-t7htMWhtB9xuls9Iw4pnEvrFp7avR1AiJ4JObo.jar
    Mar 06, 2022 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 369 files cached, 1 files newly uploaded in 0 seconds
    Mar 06, 2022 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 06, 2022 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144984 bytes, hash da5a651ca22d8ec9f070b8f5a995b6e6f0c4848b8f03dd1183ac416cf7a4e864> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-2lplHKItjsnwcLj1qZW25vDEhIuPA90Rg6xBbPek6GQ.pb
    Mar 06, 2022 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 06, 2022 6:45:09 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 06, 2022 6:45:09 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 06, 2022 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 06, 2022 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-05_22_45_09-6948856680416276941?project=apache-beam-testing
    Mar 06, 2022 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-05_22_45_09-6948856680416276941
    Mar 06, 2022 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-05_22_45_09-6948856680416276941
    Mar 06, 2022 6:45:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-06T06:45:10.660Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 06, 2022 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T06:45:20.090Z: Worker configuration: e2-standard-2 in us-central1-a.
    Mar 06, 2022 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T06:45:21.019Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 06, 2022 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T06:45:21.051Z: Expanding GroupByKey operations into optimizable parts.
    Mar 06, 2022 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T06:45:21.088Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 06, 2022 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T06:45:21.162Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 06, 2022 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T06:45:21.192Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 06, 2022 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T06:45:21.230Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 06, 2022 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T06:45:21.533Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 06, 2022 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T06:45:21.609Z: Starting 5 workers in us-central1-a...
    Mar 06, 2022 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T06:45:30.475Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 06, 2022 6:46:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T06:46:02.813Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 06, 2022 6:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T06:46:26.026Z: Workers have started successfully.
    Mar 06, 2022 6:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T06:46:26.061Z: Workers have started successfully.
    Mar 06, 2022 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T06:46:57.639Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 06, 2022 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T06:46:57.797Z: Cleaning up.
    Mar 06, 2022 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T06:46:57.889Z: Stopping worker pool...
    Mar 06, 2022 6:49:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T06:49:26.223Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 06, 2022 6:49:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T06:49:26.298Z: Worker pool stopped.
    Mar 06, 2022 6:49:33 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-05_22_45_09-6948856680416276941 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): ae1d26af-c9a2-4b59-b8e3-7a472ba308d7 and timestamp: 2022-03-06T06:49:33.469000000Z:
                     Metric:                    Value:
                   read_time                     7.942
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 06, 2022 6:49:33 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 45.281 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 12s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/yl63mftua6xtw

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3103

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3103/display/redirect>

Changes:


------------------------------------------
[...truncated 346.85 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 06, 2022 12:44:50 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 06, 2022 12:44:50 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 06, 2022 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 06, 2022 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 06, 2022 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 06, 2022 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 06, 2022 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 06, 2022 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 06, 2022 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 06, 2022 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
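
The failure above is Beam's standard error for a PCollection of Row elements with no schema attached, and the message itself names the two fixes. A minimal sketch of both, assuming a hypothetical PCollection<Row> named rows, with illustrative field names taken from the query's projection (author, type, title, score) and assumed field types:

    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Hypothetical schema matching the projected columns; the types are assumptions.
    Schema schema =
        Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();

    // Either attach the schema directly...
    rows.setRowSchema(schema);
    // ...or, equivalently, set a coder built from it.
    rows.setCoder(RowCoder.of(schema));

Either call has to happen before the PCollection is finished specifying, i.e. before the next transform is applied to it, which is exactly where the stack trace above shows the check firing.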

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 06, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 06, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 06, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 06, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 06, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 06, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 06, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1044945601]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 06, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 06, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 06, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 06, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 06, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 06, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 06, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 06, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
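
For orientation: the test runs a Beam SQL query over a BigQuery table, and the planner either evaluates the filter in a BeamCalcRel (the failing cases above) or, as here, pushes the projection and filter into the BigQuery read. A minimal, self-contained sketch of the same query shape over an in-memory table; this shows the SQL, not the push-down itself, which requires the BigQuery table provider, and all names and sample values here are placeholders rather than the CI configuration:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Stand-in schema for the subset of HACKER_NEWS columns the query touches.
    Schema schema =
        Schema.builder()
            .addStringField("by")
            .addStringField("type")
            .addStringField("title")
            .addInt32Field("score")
            .build();

    Pipeline p = Pipeline.create(PipelineOptionsFactory.create());
    PCollection<Row> input =
        p.apply(
                Create.of(
                    Row.withSchema(schema).addValues("alice", "story", "Hello", 5).build()))
            .setRowSchema(schema);

    // Applied to a single PCollection, Beam SQL exposes it under the table name PCOLLECTION.
    PCollection<Row> result =
        input.apply(
            SqlTransform.query(
                "SELECT `by` AS author, `type`, title, score FROM PCOLLECTION "
                    + "WHERE (`type` = 'story' OR `type` = 'job') AND score > 2"));
    p.run().waitUntilFinish();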
    Mar 06, 2022 12:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 06, 2022 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 06, 2022 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test9195101018391566896.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-rMlaiq0pcqqP9Gk7fkmtlEZjrd2xJzRk-Q3penlAbfc.jar
    Mar 06, 2022 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 06, 2022 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 369 files cached, 1 files newly uploaded in 0 seconds
    Mar 06, 2022 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 06, 2022 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144985 bytes, hash 133412a266044a7f60335eb0e92e39db2c5dfe1e206b0f4b4ff3cd9ab86b0469> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-EzQSomYESn9gM16w6S452yxd_h4gaw9LT_PNmrhrBGk.pb
    Mar 06, 2022 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 06, 2022 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 06, 2022 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 06, 2022 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 06, 2022 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-05_16_45_05-10509600264972469903?project=apache-beam-testing
    Mar 06, 2022 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-05_16_45_05-10509600264972469903
    Mar 06, 2022 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-05_16_45_05-10509600264972469903
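
Besides the gcloud command the runner prints, a submitted Dataflow job can also be cancelled from Java through the PipelineResult handle returned by run(). A small sketch, assuming a hypothetical Pipeline named pipeline:

    import java.io.IOException;
    import org.apache.beam.sdk.PipelineResult;

    PipelineResult result = pipeline.run();  // on DataflowRunner this is a DataflowPipelineJob
    try {
      // Requests cancellation of the remote job, same effect as the gcloud command above.
      result.cancel();
    } catch (IOException e) {
      // Cancellation is best-effort; the job may already be in a terminal state.
    }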
    Mar 06, 2022 12:45:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-06T00:45:06.899Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 06, 2022 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T00:45:14.743Z: Worker configuration: e2-standard-2 in us-central1-f.
    Mar 06, 2022 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T00:45:15.565Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 06, 2022 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T00:45:15.604Z: Expanding GroupByKey operations into optimizable parts.
    Mar 06, 2022 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T00:45:15.630Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 06, 2022 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T00:45:15.703Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 06, 2022 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T00:45:15.741Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 06, 2022 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T00:45:15.773Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 06, 2022 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T00:45:16.082Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 06, 2022 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T00:45:16.153Z: Starting 5 workers in us-central1-f...
    Mar 06, 2022 12:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T00:45:29.384Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 06, 2022 12:46:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T00:46:07.203Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Mar 06, 2022 12:46:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T00:46:07.233Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Mar 06, 2022 12:46:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T00:46:17.711Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 06, 2022 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T00:46:30.519Z: Workers have started successfully.
    Mar 06, 2022 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T00:46:30.548Z: Workers have started successfully.
    Mar 06, 2022 12:47:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T00:47:13.635Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 06, 2022 12:47:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T00:47:13.784Z: Cleaning up.
    Mar 06, 2022 12:47:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T00:47:13.858Z: Stopping worker pool...
    Mar 06, 2022 12:49:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T00:49:43.745Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 06, 2022 12:49:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-06T00:49:43.789Z: Worker pool stopped.
    Mar 06, 2022 12:49:48 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-05_16_45_05-10509600264972469903 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f53fd298-8485-421d-b4f9-2697be65d82c and timestamp: 2022-03-06T00:49:48.877000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    13.727

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 06, 2022 12:49:49 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 5 mins 2.513 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 28s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/x25ybihtxve7u

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3102

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3102/display/redirect>

Changes:


------------------------------------------
[...truncated 352.34 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 05, 2022 6:46:34 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 05, 2022 6:46:36 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 05, 2022 6:46:36 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 05, 2022 6:46:40 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 05, 2022 6:46:40 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 05, 2022 6:46:41 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 05, 2022 6:46:41 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 05, 2022 6:46:41 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 05, 2022 6:46:41 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 05, 2022 6:46:41 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 05, 2022 6:46:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 05, 2022 6:46:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 05, 2022 6:46:42 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 05, 2022 6:46:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 05, 2022 6:46:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 05, 2022 6:46:42 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 05, 2022 6:46:42 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1044945601]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 05, 2022 6:46:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 05, 2022 6:46:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 05, 2022 6:46:42 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 05, 2022 6:46:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 05, 2022 6:46:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 05, 2022 6:46:42 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 05, 2022 6:46:43 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 05, 2022 6:46:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 05, 2022 6:46:43 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 05, 2022 6:46:50 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 05, 2022 6:46:51 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 05, 2022 6:46:51 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2741346445091899055.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-IHJ4rT5DHRz9MJJ_cSsq9f7V8cdl6vQA4C-9WdOxFwI.jar
    Mar 05, 2022 6:46:51 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 369 files cached, 1 files newly uploaded in 0 seconds
    Mar 05, 2022 6:46:51 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 05, 2022 6:46:51 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144985 bytes, hash d414c063b14faed1efb1372f2d7f426216b5468b20532fb09137c1a893cbbb2a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-1BTAY7FPrtHvsTcvLX9CYha1RosgUy-wkTfBqJPLuyo.pb
    Mar 05, 2022 6:46:55 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 05, 2022 6:46:55 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 05, 2022 6:46:55 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 05, 2022 6:46:55 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 05, 2022 6:46:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-05_10_46_55-6792228072573370057?project=apache-beam-testing
    Mar 05, 2022 6:46:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-05_10_46_55-6792228072573370057
    Mar 05, 2022 6:46:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-05_10_46_55-6792228072573370057
    Mar 05, 2022 6:46:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-05T18:46:56.682Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 05, 2022 6:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T18:47:05.345Z: Worker configuration: e2-standard-2 in us-central1-b.
    Mar 05, 2022 6:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T18:47:06.099Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 05, 2022 6:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T18:47:06.131Z: Expanding GroupByKey operations into optimizable parts.
    Mar 05, 2022 6:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T18:47:06.159Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 05, 2022 6:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T18:47:06.224Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 05, 2022 6:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T18:47:06.257Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 05, 2022 6:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T18:47:06.286Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 05, 2022 6:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T18:47:06.621Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 05, 2022 6:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T18:47:06.696Z: Starting 5 workers in us-central1-b...
    Mar 05, 2022 6:47:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T18:47:30.363Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 05, 2022 6:47:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T18:47:50.783Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 05, 2022 6:48:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T18:48:17.606Z: Workers have started successfully.
    Mar 05, 2022 6:48:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T18:48:17.624Z: Workers have started successfully.
    Mar 05, 2022 6:49:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T18:49:44.336Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 05, 2022 6:49:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T18:49:44.509Z: Cleaning up.
    Mar 05, 2022 6:49:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T18:49:44.582Z: Stopping worker pool...
    Mar 05, 2022 6:52:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T18:52:13.389Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 05, 2022 6:52:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T18:52:13.451Z: Worker pool stopped.
    Mar 05, 2022 6:52:19 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-05_10_46_55-6792228072573370057 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): cdc45afc-a8c2-41e0-877c-ea24b5c63de0 and timestamp: 2022-03-05T18:52:19.314000000Z:
                     Metric:                    Value:
                   read_time                    10.203
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 05, 2022 6:52:19 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 5 mins 50.986 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 23s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/clj7oalga3uz2

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3101

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3101/display/redirect>

Changes:


------------------------------------------
[...truncated 348.70 KB...]
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 05, 2022 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 05, 2022 12:45:10 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 05, 2022 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 05, 2022 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 05, 2022 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 05, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 05, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 05, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 05, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 05, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
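
The exception above names its own remedy: a ParDo whose output is a raw Beam Row gives the CoderRegistry nothing to infer a coder from, so a schema has to be re-attached to the ParDo's output. A minimal sketch of that fix, with a hypothetical pass-through DoFn standing in for the test's RowMonitor (the names below are stand-ins, not the IT's actual code):

    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Pass-through DoFn standing in for RowMonitor; a real monitor would
    // also record metrics before re-emitting the row.
    class RowMonitorFn extends DoFn<Row, Row> {
      @ProcessElement
      public void process(@Element Row row, OutputReceiver<Row> out) {
        out.output(row);
      }
    }

    // rows: the schema-aware PCollection<Row> produced upstream.
    // Re-attaching its schema lets a RowCoder be inferred downstream;
    // .setCoder(RowCoder.of(rows.getSchema())) would be equivalent.
    PCollection<Row> monitored =
        rows.apply(ParDo.of(new RowMonitorFn()))
            .setRowSchema(rows.getSchema());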

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 05, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 05, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 05, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 05, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 05, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 05, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 05, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@646482993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 05, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 05, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 05, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 05, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 05, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 05, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 05, 2022 12:45:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 05, 2022 12:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
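
Unlike the two failing tests above, this run reaches the service because the planner produced a BeamPushDownIOSourceRel: the projection (by, type, title, score) and the WHERE clause travel into the BigQuery Storage Read API rather than executing in a BeamCalcRel. A minimal sketch of how a Beam SQL table opts into that behavior, assuming DDL registration through SqlTransform, an abbreviated column list, and an assumed table location (illustrative only, not the IT's setup):

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // TBLPROPERTIES method=DIRECT_READ is what enables filter/projection
    // push-down; the DEFAULT method falls back to an export-based read.
    String ddl =
        "CREATE EXTERNAL TABLE HACKER_NEWS ("
            + "  title VARCHAR, score INTEGER, `by` VARCHAR, type VARCHAR) " // abbreviated
            + "TYPE bigquery "
            + "LOCATION 'apache-beam-testing:beam.HACKER_NEWS' "             // assumed location
            + "TBLPROPERTIES '{\"method\": \"DIRECT_READ\"}'";

    PCollection<Row> rows =
        pipeline.apply(
            SqlTransform.query(
                    "SELECT `by` AS author, type, title, score FROM HACKER_NEWS "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2")
                .withDdlString(ddl));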
    Mar 05, 2022 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 05, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 05, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 05, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5357058098649178897.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Gf0skNU8dtXdRPtUUyNzaxj3cLKTXj7Hu0rin6paWWM.jar
    Mar 05, 2022 12:45:22 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 369 files cached, 1 files newly uploaded in 0 seconds
    Mar 05, 2022 12:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 05, 2022 12:45:22 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144984 bytes, hash bd9f54a94d5b62d531d83f454b8daa68bb9c7110d838467d4aaa9dcbc18e61c1> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-vZ9UqU1bYtUx2D9FS42qaLuccRDYOEZ9Sqqdy8GOYcE.pb
    Mar 05, 2022 12:45:25 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 05, 2022 12:45:25 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 05, 2022 12:45:25 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 05, 2022 12:45:25 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 05, 2022 12:45:26 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-05_04_45_25-16702658601070974133?project=apache-beam-testing
    Mar 05, 2022 12:45:26 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-05_04_45_25-16702658601070974133
    Mar 05, 2022 12:45:26 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-05_04_45_25-16702658601070974133
    Mar 05, 2022 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-05T12:45:27.336Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 05, 2022 12:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T12:45:49.146Z: Worker configuration: e2-standard-2 in us-central1-a.
    Mar 05, 2022 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T12:45:49.871Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 05, 2022 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T12:45:49.910Z: Expanding GroupByKey operations into optimizable parts.
    Mar 05, 2022 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T12:45:49.948Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 05, 2022 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T12:45:50.039Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 05, 2022 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T12:45:50.067Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 05, 2022 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T12:45:50.127Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 05, 2022 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T12:45:50.489Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 05, 2022 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T12:45:50.578Z: Starting 5 workers in us-central1-a...
    Mar 05, 2022 12:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T12:45:57.764Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
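
This warning recurs on every run because the project already holds 100 Dataflow-created metric descriptors. The linked Monitoring v3 API can prune old descriptors; a hedged sketch using the google-cloud-monitoring Java client (isStale is an assumed helper, not a library call):

    import com.google.api.MetricDescriptor;
    import com.google.cloud.monitoring.v3.MetricServiceClient;
    import com.google.monitoring.v3.ProjectName;

    // List the project's metric descriptors and delete stale custom ones
    // to free quota for new custom.googleapis.com/* metrics.
    try (MetricServiceClient client = MetricServiceClient.create()) {
      for (MetricDescriptor d :
          client.listMetricDescriptors(ProjectName.of("apache-beam-testing")).iterateAll()) {
        if (d.getType().startsWith("custom.googleapis.com/") && isStale(d)) {
          client.deleteMetricDescriptor(d.getName());
        }
      }
    }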
    Mar 05, 2022 12:46:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T12:46:19.336Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Mar 05, 2022 12:46:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T12:46:19.372Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Mar 05, 2022 12:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T12:46:29.657Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 05, 2022 12:46:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T12:46:53.737Z: Workers have started successfully.
    Mar 05, 2022 12:46:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T12:46:53.805Z: Workers have started successfully.
    Mar 05, 2022 12:47:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T12:47:25.233Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 05, 2022 12:47:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T12:47:25.385Z: Cleaning up.
    Mar 05, 2022 12:47:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T12:47:25.473Z: Stopping worker pool...
    Mar 05, 2022 12:49:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T12:49:53.074Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 05, 2022 12:49:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T12:49:53.127Z: Worker pool stopped.
    Mar 05, 2022 12:49:58 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-05_04_45_25-16702658601070974133 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 7a279f55-189e-4dcb-8866-e39c8c5b9b85 and timestamp: 2022-03-05T12:49:58.709000000Z:
                     Metric:                    Value:
                   read_time                     7.052
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 05, 2022 12:49:58 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
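
The warning above means the run finished but its read_time/fields_read metrics were never exported to InfluxDB. A sketch of the missing settings as extra entries in the job's -DbeamTestPipelineOptions array, assuming Beam test-infra option names and a placeholder host (all three values below are assumptions, not this job's real configuration):

    "--influxDatabase=beam_performance",
    "--influxMeasurement=sql_bqio_read_java_batch",
    "--influxHost=http://localhost:8086"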

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 53.037 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 39s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/iidh2qg5lulxw

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3100

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3100/display/redirect>

Changes:


------------------------------------------
[...truncated 348.14 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 05, 2022 6:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 05, 2022 6:44:54 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 05, 2022 6:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 05, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 05, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 05, 2022 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 05, 2022 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 05, 2022 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 05, 2022 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 05, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 05, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 05, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 05, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 05, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 05, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 05, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 05, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1044945601]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 05, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 05, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 05, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 05, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 05, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 05, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 05, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 05, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 05, 2022 6:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 05, 2022 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 05, 2022 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 05, 2022 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1780142748551922573.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Pb8zChqeev75mZ4FmVQtkS6bwMKFY4YCjDNdtTeoN0s.jar
    Mar 05, 2022 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 369 files cached, 1 files newly uploaded in 0 seconds
    Mar 05, 2022 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 05, 2022 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144982 bytes, hash f699310f97de59b1973c57e94d304f23c2880579cd5bd2e4a130ce2792647a5f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-9pkxD5feWbGXPFfpTTBPI8KIBXnNW9LkoTDOJ5Jkel8.pb
    Mar 05, 2022 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 05, 2022 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 05, 2022 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 05, 2022 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 05, 2022 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-04_22_45_10-2707305639417274557?project=apache-beam-testing
    Mar 05, 2022 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-04_22_45_10-2707305639417274557
    Mar 05, 2022 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-04_22_45_10-2707305639417274557
    Mar 05, 2022 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-05T06:45:12.227Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 05, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T06:45:22.178Z: Worker configuration: e2-standard-2 in us-central1-a.
    Mar 05, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T06:45:23.082Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 05, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T06:45:23.129Z: Expanding GroupByKey operations into optimizable parts.
    Mar 05, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T06:45:23.167Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 05, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T06:45:23.229Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 05, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T06:45:23.278Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 05, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T06:45:23.303Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 05, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T06:45:23.660Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 05, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T06:45:23.737Z: Starting 5 workers in us-central1-a...
    Mar 05, 2022 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T06:45:42.228Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 05, 2022 6:46:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T06:45:59.858Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Mar 05, 2022 6:46:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T06:45:59.910Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Mar 05, 2022 6:46:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T06:46:10.273Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 05, 2022 6:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T06:46:33.629Z: Workers have started successfully.
    Mar 05, 2022 6:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T06:46:33.662Z: Workers have started successfully.
    Mar 05, 2022 6:47:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T06:47:07.631Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 05, 2022 6:47:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T06:47:07.772Z: Cleaning up.
    Mar 05, 2022 6:47:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T06:47:07.846Z: Stopping worker pool...
    Mar 05, 2022 6:49:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T06:49:37.857Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 05, 2022 6:49:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T06:49:37.906Z: Worker pool stopped.
    Mar 05, 2022 6:50:17 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-04_22_45_10-2707305639417274557 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): fb3b0d93-c40b-4b13-8c63-a642d2a0acdd and timestamp: 2022-03-05T06:50:17.450000000Z:
                     Metric:                    Value:
                   read_time                     7.317
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 05, 2022 6:50:17 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 5 mins 27.622 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 57s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/cnitpkovhf6km

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3099

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3099/display/redirect?page=changes>

Changes:

[rahuliyer573] py: Import beam plugins before starting SdkHarness

[Valentyn Tymofieiev] Bump numpy bound to include 1.22 and regenerate container deps.

[github-actions] [BEAM-13925] months in date constructor are 0 indexed

[noreply] Remove resolved issue in docs + update class path on sample (#17018)

[noreply] [BEAM-14016] Fixed flaky postcommit test (#17009)

[noreply] Remove resolved issue in notebook

[noreply] [BEAM-13947] Add split() and rsplit(), non-deferred column operations on

[noreply] BEAM-14026 - Fixes bug related to Unnesting nested rows in an array


------------------------------------------
[...truncated 352.92 KB...]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 05, 2022 12:45:20 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 05, 2022 12:45:21 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 05, 2022 12:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 05, 2022 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 05, 2022 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 05, 2022 12:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 05, 2022 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 05, 2022 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 05, 2022 12:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 05, 2022 12:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 05, 2022 12:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 05, 2022 12:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 05, 2022 12:45:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 05, 2022 12:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 05, 2022 12:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 05, 2022 12:45:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 05, 2022 12:45:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1044945601]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 05, 2022 12:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 05, 2022 12:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 05, 2022 12:45:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 05, 2022 12:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 05, 2022 12:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 05, 2022 12:45:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 05, 2022 12:45:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 05, 2022 12:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 05, 2022 12:45:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 05, 2022 12:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 05, 2022 12:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 05, 2022 12:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2544570552530572789.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-WVVrAeT3ijnXkIRE6rCV9GEQSMUDgdrPFRQEt6ojijc.jar
    Mar 05, 2022 12:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.38.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.38.0-SNAPSHOT-tests-CkGJ26g4UGYUw7qzLSacMoy60n8-a5c4gFqb_JGb87Q.jar
    Mar 05, 2022 12:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.38.0-SNAPSHOT-6WvkkhsYGk4qZRBCzCCEdUcpUzifj46fScl5woWPDL0.jar
    Mar 05, 2022 12:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.dataformat/jackson-dataformat-xml/2.13.0/396634365e8439b27794fa94b571fdc03b4cf7be/jackson-dataformat-xml-2.13.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jackson-dataformat-xml-2.13.0-Yyu4WPqE_RYmgacP7BZkHcWrY2pzow2igQ0noIVmu7o.jar
    Mar 05, 2022 12:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.immutables/value/2.8.8/d99fa1e04af5a1fda42fa9412d68eb7fe17a1071/value-2.8.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/value-2.8.8-k_DQ4kEt4FHC2WzIqOzB87qR2WH_Mw81deYbQ6-miFA.jar
    Mar 05, 2022 12:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.fasterxml.woodstox/woodstox-core/6.2.6/d4a055af5dcadea8d1bded5b64bc7be5bb7fea95/woodstox-core-6.2.6.jar to gs://temp-storage-for-perf-tests/loadtests/staging/woodstox-core-6.2.6-7RMZid5VnxZ00HVSgs8J3XgTkSFBRgr4IvXvHJtaCuE.jar
    Mar 05, 2022 12:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.inject.extensions/guice-servlet/3.0/610cde0e8da5a8b7d8efb8f0b8987466ffebaaf9/guice-servlet-3.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/guice-servlet-3.0-nnKkuFgoiNU8L0KX6TJ2o8FMgogBJEkPLaexap3xxhg.jar
    Mar 05, 2022 12:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mortbay.jetty/servlet-api/2.5-20081211/22bff70037e1e6fa7e6413149489552ee2064702/servlet-api-2.5-20081211.jar to gs://temp-storage-for-perf-tests/loadtests/staging/servlet-api-2.5-20081211-BodWCWmW_gD2BKw7ZnLW9mPcd36kqDBW4kDQRW535HI.jar
    Mar 05, 2022 12:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 362 files cached, 8 files newly uploaded in 1 second
    Mar 05, 2022 12:45:34 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 05, 2022 12:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144985 bytes, hash 312afb4d035a6975dd3448f2fc90820d24b0390cf2b500cbaf7b22f8646189cd> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-MSr7TQNaaXXdNEjy_JCCDSSwOQzytQDLr3si-GRhic0.pb
    Mar 05, 2022 12:45:38 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 05, 2022 12:45:38 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 05, 2022 12:45:38 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 05, 2022 12:45:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 05, 2022 12:45:39 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-04_16_45_38-14879455153471387194?project=apache-beam-testing
    Mar 05, 2022 12:45:39 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-04_16_45_38-14879455153471387194
    Mar 05, 2022 12:45:39 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-04_16_45_38-14879455153471387194
    Mar 05, 2022 12:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-05T00:45:39.851Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 05, 2022 12:45:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T00:45:56.550Z: Worker configuration: e2-standard-2 in us-central1-b.
    Mar 05, 2022 12:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T00:45:57.362Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 05, 2022 12:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T00:45:57.412Z: Expanding GroupByKey operations into optimizable parts.
    Mar 05, 2022 12:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T00:45:57.442Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 05, 2022 12:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T00:45:57.516Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 05, 2022 12:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T00:45:57.543Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 05, 2022 12:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T00:45:57.566Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 05, 2022 12:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T00:45:57.942Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 05, 2022 12:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T00:45:58.020Z: Starting 5 workers in us-central1-b...
    Mar 05, 2022 12:46:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T00:46:20.378Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 05, 2022 12:46:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T00:46:44.882Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 05, 2022 12:47:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T00:47:10.651Z: Workers have started successfully.
    Mar 05, 2022 12:47:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T00:47:10.690Z: Workers have started successfully.
    Mar 05, 2022 12:48:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T00:47:59.367Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 05, 2022 12:48:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T00:47:59.612Z: Cleaning up.
    Mar 05, 2022 12:48:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T00:47:59.961Z: Stopping worker pool...
    Mar 05, 2022 12:50:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T00:50:24.986Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 05, 2022 12:50:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-05T00:50:25.348Z: Worker pool stopped.
    Mar 05, 2022 12:50:34 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-04_16_45_38-14879455153471387194 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): e61f11ba-ecd9-4658-a8b1-7a795b9404fc and timestamp: 2022-03-05T00:50:34.627000000Z:
                     Metric:                    Value:
                   read_time                     9.737
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 05, 2022 12:50:34 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
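
This warning comes from Beam's test-utils metrics publisher: when no InfluxDB measurement/database is configured, results are only written to the console and (if configured) to BigQuery. A minimal sketch of supplying those settings, assuming the org.apache.beam.sdk.testutils.publishing.InfluxDBSettings builder API; the host, database, and measurement values below are placeholders, not this job's configuration:

        import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

        // Hypothetical configuration: publishWithCheck skips publishing when
        // measurement or database is unset, which is what the warning reports.
        static InfluxDBSettings influxSettings() {
          return InfluxDBSettings.builder()
              .withHost("http://localhost:8086")           // placeholder host
              .withDatabase("beam_test_metrics")           // placeholder database
              .withMeasurement("sql_bqio_read_java_batch") // placeholder measurement
              .get();
        }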

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 5 mins 18.343 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings
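
The deprecation notice above is generic; to see which build scripts or plugins trigger it, the same task can be re-run with the suggested flag, e.g. (task path taken from this log, a sketch rather than the job's actual Jenkins step):

        ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --warning-mode all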

BUILD FAILED in 6m 14s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/eysynvjuhlhag

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3098

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3098/display/redirect?page=changes>

Changes:

[stephen.patel] BEAM-14011 fix s3 filesystem multipart copy

[noreply] Merge pull request #16842 from [BEAM-13932][Playground] Container's user

[noreply] Doc updates and blog post for 2.37.0 (#16887)


------------------------------------------
[...truncated 346.51 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 99c142c6f67057c0c3f790a63784b902
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 04, 2022 6:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 04, 2022 6:44:57 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 04, 2022 6:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 04, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 04, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 04, 2022 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 04, 2022 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 04, 2022 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 04, 2022 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 04, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

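This failure (and the readUsingDefaultMethod failure below) share the same root cause: ParDo(RowMonitor) outputs Beam Row values, and a Row has no coder the CoderRegistry can infer unless the PCollection carries a schema. A minimal sketch of the remedy the exception itself suggests, with the schema taken from the SELECT list above; the helper name and field nullability are assumptions, not the IT's actual code:

        import org.apache.beam.sdk.schemas.Schema;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        // Attach the schema the output rows are known to have, so no
        // coder inference for Row is needed downstream.
        static PCollection<Row> withKnownSchema(PCollection<Row> rows) {
          Schema schema =
              Schema.builder()
                  .addNullableField("author", Schema.FieldType.STRING)
                  .addNullableField("type", Schema.FieldType.STRING)
                  .addNullableField("title", Schema.FieldType.STRING)
                  .addNullableField("score", Schema.FieldType.INT32)
                  .build();
          // setRowSchema(schema) is equivalent to setCoder(RowCoder.of(schema)).
          return rows.setRowSchema(schema);
        }
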
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 04, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 04, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 04, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 04, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 04, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 04, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 04, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1044945601]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 04, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 04, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 04, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 04, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 04, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 04, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 04, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 04, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
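
Here the planner has collapsed the LogicalProject/LogicalFilter pair into the IO source itself (the BeamPushDownIOSourceRel above), so only the four referenced fields and the supported predicate are handed to the BigQuery Storage Read API. For reference, a sketch of issuing the same query through Beam SQL, assuming a pipeline whose table provider registers HACKER_NEWS as this test does (the method and variable names are placeholders, not the IT's code):

        import org.apache.beam.sdk.Pipeline;
        import org.apache.beam.sdk.extensions.sql.SqlTransform;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        // `by` is backtick-quoted because BY is a reserved word in Calcite SQL.
        static PCollection<Row> runQuery(Pipeline pipeline) {
          return pipeline.apply(
              SqlTransform.query(
                  "SELECT `by` AS author, type, title, score "
                      + "FROM HACKER_NEWS "
                      + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
        }
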
    Mar 04, 2022 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 04, 2022 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 04, 2022 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 04, 2022 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7872493547522177594.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-zk1gIGrqSQ_5pHKMTui1TGXA5zveasPBR6aqBPVzr6o.jar
    Mar 04, 2022 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 369 files cached, 1 file newly uploaded in 0 seconds
    Mar 04, 2022 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 04, 2022 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144984 bytes, hash 9deaea1549c6af9201c02dbe73de9d8fc56e9d8d922416a91f1cf89378321080> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-nerqFUnGr5IBwC2-c96dj8VunY2SJBapHxz4k3gyEIA.pb
    Mar 04, 2022 6:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 04, 2022 6:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 04, 2022 6:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 04, 2022 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 04, 2022 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-04_10_45_14-13321804806333573010?project=apache-beam-testing
    Mar 04, 2022 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-04_10_45_14-13321804806333573010
    Mar 04, 2022 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-04_10_45_14-13321804806333573010
    Mar 04, 2022 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-04T18:45:15.352Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 04, 2022 6:45:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T18:45:30.485Z: Worker configuration: e2-standard-2 in us-central1-a.
    Mar 04, 2022 6:45:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T18:45:31.271Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 04, 2022 6:45:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T18:45:31.310Z: Expanding GroupByKey operations into optimizable parts.
    Mar 04, 2022 6:45:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T18:45:31.339Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 04, 2022 6:45:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T18:45:31.411Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 04, 2022 6:45:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T18:45:31.439Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 04, 2022 6:45:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T18:45:31.465Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 04, 2022 6:45:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T18:45:32.051Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 04, 2022 6:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T18:45:32.162Z: Starting 5 workers in us-central1-a...
    Mar 04, 2022 6:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T18:45:35.046Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 04, 2022 6:46:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T18:46:11.048Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 04, 2022 6:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T18:46:35.091Z: Workers have started successfully.
    Mar 04, 2022 6:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T18:46:35.169Z: Workers have started successfully.
    Mar 04, 2022 6:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T18:47:06.298Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 04, 2022 6:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T18:47:06.466Z: Cleaning up.
    Mar 04, 2022 6:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T18:47:06.560Z: Stopping worker pool...
    Mar 04, 2022 6:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T18:49:30.903Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 04, 2022 6:49:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T18:49:32.549Z: Worker pool stopped.
    Mar 04, 2022 6:49:45 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-04_10_45_14-13321804806333573010 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 35aaf4be-3709-475b-b0c5-f5db4bb3e17b and timestamp: 2022-03-04T18:49:45.714000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.932

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 04, 2022 6:49:45 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 52.853 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 23s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/h7a44o5qjf53a

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3097

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3097/display/redirect>

Changes:


------------------------------------------
[...truncated 346.32 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 99c142c6f67057c0c3f790a63784b902
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 04, 2022 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 04, 2022 12:44:56 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 04, 2022 12:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 04, 2022 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 04, 2022 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 04, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 04, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 04, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 04, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 04, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 04, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 04, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 04, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 04, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 04, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 04, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 04, 2022 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@646482993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 04, 2022 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 04, 2022 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 04, 2022 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 04, 2022 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 04, 2022 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 04, 2022 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 04, 2022 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 04, 2022 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 04, 2022 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 04, 2022 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 04, 2022 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 04, 2022 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7034387175127142621.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-4nRtnyHIxuBq77YqOnGmUx_HBsl-9GEcSB6XBrcL7MM.jar
    Mar 04, 2022 12:45:12 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 369 files cached, 1 file newly uploaded in 4 seconds
    Mar 04, 2022 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 04, 2022 12:45:12 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144984 bytes, hash b294f08985343d7a048bcc579607820b372e4ecaba9be96952508f275e221f56> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-spTwiYU0PXoEi8xXlgeCCzcuTsq6m-lpUlCPJ14iH1Y.pb
    Mar 04, 2022 12:45:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 04, 2022 12:45:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 04, 2022 12:45:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 04, 2022 12:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 04, 2022 12:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-04_04_45_16-5072069281593687663?project=apache-beam-testing
    Mar 04, 2022 12:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-04_04_45_16-5072069281593687663
    Mar 04, 2022 12:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-04_04_45_16-5072069281593687663
    Mar 04, 2022 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-04T12:45:17.576Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 04, 2022 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T12:45:37.915Z: Worker configuration: e2-standard-2 in us-central1-b.
    Mar 04, 2022 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T12:45:38.825Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 04, 2022 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T12:45:38.864Z: Expanding GroupByKey operations into optimizable parts.
    Mar 04, 2022 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T12:45:38.901Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 04, 2022 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T12:45:39.003Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 04, 2022 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T12:45:39.031Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 04, 2022 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T12:45:39.064Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 04, 2022 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T12:45:39.429Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 04, 2022 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T12:45:39.508Z: Starting 5 workers in us-central1-b...
    Mar 04, 2022 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T12:45:45.012Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 04, 2022 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T12:46:33.531Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 04, 2022 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T12:47:00.000Z: Workers have started successfully.
    Mar 04, 2022 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T12:47:00.046Z: Workers have started successfully.
    Mar 04, 2022 12:47:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T12:47:33.942Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 04, 2022 12:47:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T12:47:34.111Z: Cleaning up.
    Mar 04, 2022 12:47:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T12:47:34.178Z: Stopping worker pool...
    Mar 04, 2022 12:50:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T12:50:04.899Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 04, 2022 12:50:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T12:50:04.937Z: Worker pool stopped.
    Mar 04, 2022 12:50:12 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-04_04_45_16-5072069281593687663 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 7339b56a-1dcb-49bb-ba48-fdc7c017b63a and timestamp: 2022-03-04T12:50:12.953000000Z:
                     Metric:                    Value:
                   read_time                     9.296
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 04, 2022 12:50:13 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 5 mins 21.513 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 52s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/cc2zo4reeojkm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3096

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3096/display/redirect>

Changes:


------------------------------------------
[...truncated 349.87 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 99c142c6f67057c0c3f790a63784b902
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 04, 2022 6:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 04, 2022 6:45:24 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 04, 2022 6:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 04, 2022 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 04, 2022 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 04, 2022 6:45:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 04, 2022 6:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 04, 2022 6:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 04, 2022 6:45:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 04, 2022 6:45:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

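The failure above is Beam's standard schema/coder error: ParDo(RowMonitor) outputs Row elements, and a PCollection<Row> has no default coder until a schema is attached. A minimal sketch of the fix the exception message itself names, with illustrative names (the helper and field list below are assumptions modeled on the Hacker News query, not code from this test):

    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Schema matching the projected columns of the query above.
    Schema schema = Schema.builder()
        .addNullableField("author", Schema.FieldType.STRING)
        .addNullableField("type", Schema.FieldType.STRING)
        .addNullableField("title", Schema.FieldType.STRING)
        .addNullableField("score", Schema.FieldType.INT64)
        .build();

    PCollection<Row> rows = readHackerNewsRows(pipeline);  // hypothetical helper producing Rows
    rows.setRowSchema(schema);                             // lets Beam infer a RowCoder
    // Equivalent explicit form: rows.setCoder(RowCoder.of(schema));
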
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 04, 2022 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 04, 2022 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 04, 2022 6:45:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 04, 2022 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 04, 2022 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 04, 2022 6:45:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 04, 2022 6:45:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@646482993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 04, 2022 6:45:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 04, 2022 6:45:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 04, 2022 6:45:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 04, 2022 6:45:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 04, 2022 6:45:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 04, 2022 6:45:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 04, 2022 6:45:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 04, 2022 6:45:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
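
    With push-down enabled, the filter logged above is not evaluated inside the
    pipeline; together with the four projected fields from usedFields it is handed
    to the BigQuery Storage Read API as a row restriction. A rough sketch of what
    that corresponds to at the Storage API level (an illustration, not the code
    this test executes):

        import com.google.cloud.bigquery.storage.v1.ReadSession;

        ReadSession.TableReadOptions readOptions =
            ReadSession.TableReadOptions.newBuilder()
                .addSelectedFields("by")      // projected columns from usedFields above
                .addSelectedFields("type")
                .addSelectedFields("title")
                .addSelectedFields("score")
                .setRowRestriction("(type = 'story' OR type = 'job') AND score > 2")
                .build();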
    Mar 04, 2022 6:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 04, 2022 6:45:36 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 04, 2022 6:45:36 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 04, 2022 6:45:36 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5230011643890740880.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-cCib6GTd2_ZKmPCbCeD5xcTtCW-o7xVwxTIsswbWPo4.jar
    Mar 04, 2022 6:45:37 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 369 files cached, 1 files newly uploaded in 0 seconds
    Mar 04, 2022 6:45:37 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 04, 2022 6:45:37 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144984 bytes, hash d71038a69f0bdbdccc63f21104478466e48650da3410f08d58092ae851853919> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-1xA4pp8L29zMY_IRBEeEZuSGUNo0EPCNWAkq6FGFORk.pb
    Mar 04, 2022 6:45:40 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 04, 2022 6:45:40 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 04, 2022 6:45:40 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 04, 2022 6:45:40 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 04, 2022 6:45:42 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-03_22_45_41-17793742680833769503?project=apache-beam-testing
    Mar 04, 2022 6:45:42 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-03_22_45_41-17793742680833769503
    Mar 04, 2022 6:45:42 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-03_22_45_41-17793742680833769503
    Mar 04, 2022 6:45:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-04T06:45:42.732Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 04, 2022 6:45:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T06:45:56.797Z: Worker configuration: e2-standard-2 in us-central1-a.
    Mar 04, 2022 6:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T06:45:57.722Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 04, 2022 6:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T06:45:57.764Z: Expanding GroupByKey operations into optimizable parts.
    Mar 04, 2022 6:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T06:45:57.800Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 04, 2022 6:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T06:45:57.873Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 04, 2022 6:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T06:45:57.898Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 04, 2022 6:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T06:45:57.942Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 04, 2022 6:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T06:45:58.281Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 04, 2022 6:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T06:45:58.354Z: Starting 5 workers in us-central1-a...
    Mar 04, 2022 6:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T06:46:24.333Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 04, 2022 6:46:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T06:46:44.291Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 04, 2022 6:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T06:47:08.112Z: Workers have started successfully.
    Mar 04, 2022 6:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T06:47:08.129Z: Workers have started successfully.
    Mar 04, 2022 6:47:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T06:47:40.507Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 04, 2022 6:47:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T06:47:40.689Z: Cleaning up.
    Mar 04, 2022 6:47:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T06:47:40.776Z: Stopping worker pool...
    Mar 04, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T06:50:06.436Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 04, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T06:50:06.642Z: Worker pool stopped.
    Mar 04, 2022 6:50:13 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-03_22_45_41-17793742680833769503 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 20c34b73-ea1b-4b40-958b-57e5702aaf82 and timestamp: 2022-03-04T06:50:13.415000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                      7.34

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 04, 2022 6:50:13 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 53.951 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 46s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/nveypxazbce3k

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3095

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3095/display/redirect?page=changes>

Changes:

[stranniknm] [BEAM-13999] playground - support vertical orientation for graph

[noreply] [Cleanup] Update pre-v2 go package references (#17002)

[noreply] [BEAM-13885] Add unit tests to window package (#16971)

[noreply] Merge pull request #16891 from [BEAM-13872] [Playground] Increase test

[noreply] Merge pull request #16912 from [BEAM-13878] [Playground] Increase test

[noreply] Merge pull request #16946 from [BEAM-13873] [Playground] Increase test

[noreply] [BEAM-13951] Update mass_comment.py list of Run commands (#16889)

[noreply] [BEAM-10652] Allow Clustering without Partition in BigQuery (#16578)

[noreply] [BEAM-13857] Add K:V flags for expansion service jars and addresses to


------------------------------------------
[...truncated 352.40 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 99c142c6f67057c0c3f790a63784b902
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 04, 2022 12:45:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 04, 2022 12:45:53 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 04, 2022 12:45:54 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 04, 2022 12:45:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 04, 2022 12:45:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 04, 2022 12:45:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 04, 2022 12:45:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 04, 2022 12:45:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 04, 2022 12:45:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 04, 2022 12:45:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 04, 2022 12:46:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 04, 2022 12:46:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 04, 2022 12:46:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 04, 2022 12:46:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 04, 2022 12:46:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 04, 2022 12:46:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 04, 2022 12:46:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1044945601]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 04, 2022 12:46:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 04, 2022 12:46:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 04, 2022 12:46:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 04, 2022 12:46:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 04, 2022 12:46:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 04, 2022 12:46:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 04, 2022 12:46:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 04, 2022 12:46:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 04, 2022 12:46:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 04, 2022 12:46:06 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 04, 2022 12:46:06 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 04, 2022 12:46:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8061024345216270208.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-oClxwzXmH_xFp1-AYwcO8EqoQwT5POSN7RlVZg59LSE.jar
    Mar 04, 2022 12:46:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 369 files cached, 1 files newly uploaded in 0 seconds
    Mar 04, 2022 12:46:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 04, 2022 12:46:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144984 bytes, hash 46ded5eb3e06c5553d32e6d2bf5af2d9caf626b14dc75783e2599f533aa1fda5> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Rt7V6z4GxVU9MubSv1ry2cr2JrFNx1eD4lmfUzqh_aU.pb
    Mar 04, 2022 12:46:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 04, 2022 12:46:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 04, 2022 12:46:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 04, 2022 12:46:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 04, 2022 12:46:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-03_16_46_11-4647715438770269555?project=apache-beam-testing
    Mar 04, 2022 12:46:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-03_16_46_11-4647715438770269555
    Mar 04, 2022 12:46:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-03_16_46_11-4647715438770269555
    Mar 04, 2022 12:46:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-04T00:46:12.825Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 04, 2022 12:46:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T00:46:31.316Z: Worker configuration: e2-standard-2 in us-central1-c.
    Mar 04, 2022 12:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T00:46:32.241Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 04, 2022 12:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T00:46:32.345Z: Expanding GroupByKey operations into optimizable parts.
    Mar 04, 2022 12:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T00:46:32.405Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 04, 2022 12:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T00:46:32.551Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 04, 2022 12:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T00:46:32.589Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 04, 2022 12:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T00:46:32.635Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 04, 2022 12:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T00:46:32.991Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 04, 2022 12:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T00:46:33.074Z: Starting 5 workers in us-central1-c...
    Mar 04, 2022 12:46:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T00:46:48.287Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 04, 2022 12:47:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T00:47:21.400Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 04, 2022 12:47:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T00:47:48.247Z: Workers have started successfully.
    Mar 04, 2022 12:47:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T00:47:48.444Z: Workers have started successfully.
    Mar 04, 2022 12:48:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T00:48:22.555Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 04, 2022 12:48:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T00:48:23.853Z: Cleaning up.
    Mar 04, 2022 12:48:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T00:48:24.343Z: Stopping worker pool...
    Mar 04, 2022 12:50:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T00:50:53.249Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 04, 2022 12:50:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-04T00:50:53.566Z: Worker pool stopped.
    Mar 04, 2022 12:51:04 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-03_16_46_11-4647715438770269555 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): ce937447-f203-48d9-ae1f-42aeedc445e4 and timestamp: 2022-03-04T00:51:04.738000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.967

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 04, 2022 12:51:04 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 5 mins 15.907 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 40s
165 actionable tasks: 106 executed, 57 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/ua4u4gqgqq2te

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3094

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3094/display/redirect?page=changes>

Changes:

[noreply] [adhoc] Prepare aws2 ClientConfiguration for json serialization and

[noreply] Merge pull request #16879 from [BEAM-12164] Add javadocs to


------------------------------------------
[...truncated 349.81 KB...]
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 03, 2022 6:44:53 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 03, 2022 6:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 03, 2022 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 03, 2022 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 03, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 03, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 03, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 03, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 03, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 03, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 03, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 03, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 03, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 03, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 03, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 03, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@646482993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)
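
Both the DIRECT_READ and DEFAULT runs fail identically, and at pipeline-construction time rather than on Dataflow: the output of ParDo(RowMonitor) is a PCollection<Row>, and Beam cannot infer a Coder for Row unless a schema is attached. Here the exception is raised inside Beam SQL's own rel translation (BeamSqlRelUtils.toPCollection), so the fix belongs in the SDK-side transform, but in user code the remedy is exactly what the message names. A minimal sketch, with a hypothetical RowMonitorFn and assumed field types for the projected columns:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Assumed schema for the four projected columns; not taken from this build.
    Schema schema =
        Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();

    PCollection<Row> monitored =
        rows.apply("RowMonitor", ParDo.of(new RowMonitorFn())) // RowMonitorFn assumed
            .setRowSchema(schema); // equivalent to setCoder(RowCoder.of(schema))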

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 03, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 03, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 03, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 03, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 03, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 03, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 03, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 03, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
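
This log line is the payoff of push-down, and the contrast with the two failing tests above: the BEAMPlan contains no BeamCalcRel at all, because the used fields and the supported filter are shipped inside the BigQuery Storage Read API request instead of being applied in a Beam transform, so fewer rows and columns ever leave BigQuery. The same read can be sketched directly against BigQueryIO, without Beam SQL (table name assumed for illustration):

    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;

    pipeline.apply(
        "Read HACKER_NEWS with push-down",
        BigQueryIO.readTableRows()
            .from("bigquery-public-data:hacker_news.full") // assumed table
            .withMethod(Method.DIRECT_READ)                // BigQuery Storage Read API
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));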
    Mar 03, 2022 6:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 03, 2022 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 03, 2022 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 03, 2022 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2985300300093909384.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-l0mIMOY6fZ9ZW2atZW4ZWYKJ6_IGZ13sIrxt_w93BHU.jar
    Mar 03, 2022 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 369 files cached, 1 files newly uploaded in 0 seconds
    Mar 03, 2022 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 03, 2022 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144985 bytes, hash 94836e11eda0feb875037b926f41f15285c8a5983f888684356548e70e9f1c56> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-lINuEe2g_rh1A3uSb0HxUoXIpZg_iIaENWVI5w6fHFY.pb
    Mar 03, 2022 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 03, 2022 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 03, 2022 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 03, 2022 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 03, 2022 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-03_10_45_08-8959569593918474806?project=apache-beam-testing
    Mar 03, 2022 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-03_10_45_08-8959569593918474806
    Mar 03, 2022 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-03_10_45_08-8959569593918474806
    Mar 03, 2022 6:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-03T18:45:10.835Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 03, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T18:45:19.615Z: Worker configuration: e2-standard-2 in us-central1-b.
    Mar 03, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T18:45:20.448Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 03, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T18:45:20.510Z: Expanding GroupByKey operations into optimizable parts.
    Mar 03, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T18:45:20.538Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 03, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T18:45:20.591Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 03, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T18:45:20.620Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 03, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T18:45:20.652Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 03, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T18:45:20.975Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 03, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T18:45:21.064Z: Starting 5 workers in us-central1-b...
    Mar 03, 2022 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T18:45:22.633Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 03, 2022 6:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T18:46:03.412Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Mar 03, 2022 6:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T18:46:03.450Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Mar 03, 2022 6:46:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T18:46:13.844Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 03, 2022 6:46:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T18:46:37.990Z: Workers have started successfully.
    Mar 03, 2022 6:46:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T18:46:38.027Z: Workers have started successfully.
    Mar 03, 2022 6:48:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-03T18:48:22.103Z: Staged package bcpkix-jdk15on-1.67-77ynVIgM45IspHpDwfC3LEVzFFCg7xk7nbM79Ls4zl8.jar at location 'gs://temp-storage-for-perf-tests/loadtests/staging/bcpkix-jdk15on-1.67-77ynVIgM45IspHpDwfC3LEVzFFCg7xk7nbM79Ls4zl8.jar' is inaccessible.
    Mar 03, 2022 6:48:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-03T18:48:22.169Z: Staged package bcprov-jdk15on-1.67--gBBo2-fIK88a42_brSalp4snMApBJ1hrMUmujJHs-8.jar at location 'gs://temp-storage-for-perf-tests/loadtests/staging/bcprov-jdk15on-1.67--gBBo2-fIK88a42_brSalp4snMApBJ1hrMUmujJHs-8.jar' is inaccessible.
    Mar 03, 2022 6:48:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-03T18:48:27.066Z: Staged package kafka-clients-2.4.1-or4Vrm01S3aXE64MuzGE8iWqFvyYIFseskr9n8KjYNk.jar at location 'gs://temp-storage-for-perf-tests/loadtests/staging/kafka-clients-2.4.1-or4Vrm01S3aXE64MuzGE8iWqFvyYIFseskr9n8KjYNk.jar' is inaccessible.
    Mar 03, 2022 6:48:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-03T18:48:27.253Z: Staged package lz4-java-1.6.0-0ilUWqKx1SA8h2YUvbz_yswwNpf0-PJvdk4dbB7S5BY.jar at location 'gs://temp-storage-for-perf-tests/loadtests/staging/lz4-java-1.6.0-0ilUWqKx1SA8h2YUvbz_yswwNpf0-PJvdk4dbB7S5BY.jar' is inaccessible.
    Mar 03, 2022 6:48:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-03T18:48:27.889Z: Staged package opencensus-proto-0.2.0-DBktRR6d106Ychsn0C8OK2vKRLUVY7Xavy4hH3o-vxM.jar at location 'gs://temp-storage-for-perf-tests/loadtests/staging/opencensus-proto-0.2.0-DBktRR6d106Ychsn0C8OK2vKRLUVY7Xavy4hH3o-vxM.jar' is inaccessible.
    Mar 03, 2022 6:48:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-03T18:48:28.552Z: Staged package re2j-1.5-wGL2flsRxmZQ5FwPQgsdV2jouACbCztdr5vMWICniUw.jar at location 'gs://temp-storage-for-perf-tests/loadtests/staging/re2j-1.5-wGL2flsRxmZQ5FwPQgsdV2jouACbCztdr5vMWICniUw.jar' is inaccessible.
    Mar 03, 2022 6:48:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-03-03T18:48:28.940Z: Staged package threetenbp-1.5.2-bEdl_gfgN6PmHz-5FShcx0Qm1_Jv0MPlW9EuqmKn93o.jar at location 'gs://temp-storage-for-perf-tests/loadtests/staging/threetenbp-1.5.2-bEdl_gfgN6PmHz-5FShcx0Qm1_Jv0MPlW9EuqmKn93o.jar' is inaccessible.
    Mar 03, 2022 6:48:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-03T18:48:29.181Z: One or more access checks for temp location or staged files failed. Please refer to other error messages for details. For more information on security and permissions, please see https://cloud.google.com/dataflow/security-and-permissions.
    Mar 03, 2022 6:50:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T18:50:41.812Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 03, 2022 6:50:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T18:50:41.962Z: Cleaning up.
    Mar 03, 2022 6:50:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T18:50:42.030Z: Stopping worker pool...
    Mar 03, 2022 6:53:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T18:53:03.682Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 03, 2022 6:53:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T18:53:03.765Z: Worker pool stopped.
    Mar 03, 2022 6:53:21 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-03_10_45_08-8959569593918474806 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f665c961-b7e6-4d8d-922f-c06c6402e8e9 and timestamp: 2022-03-03T18:53:21.659000000Z:
                     Metric:                    Value:
                   read_time                    15.087
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 03, 2022 6:53:21 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
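
The job itself finished, but the read_time/fields_read metrics above are discarded because the InfluxDB publisher has no target to write to. In Beam's test utilities that target comes from InfluxDBSettings; a minimal sketch with assumed host, database, and measurement values (only the builder pattern is taken from the testutils API):

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    InfluxDBSettings settings =
        InfluxDBSettings.builder()
            .withHost("http://localhost:8086")           // assumed endpoint
            .withDatabase("beam_test_metrics")           // assumed database
            .withMeasurement("sql_bqio_read_java_batch") // assumed measurement name
            .build();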

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.04 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.082 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 8 mins 32.64 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 9m
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/5lvcwip2vrbee

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3093

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3093/display/redirect>

Changes:


------------------------------------------
[...truncated 346.66 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 79d5b027de8ec43434045f271db56481
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
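
The executor JVM receives the suite's configuration as a JSON array in the -DbeamTestPipelineOptions system property visible in the command line above; TestPipeline deserializes it when the test constructs its options. A minimal sketch of the consuming side:

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    // Parses the JSON array supplied via -DbeamTestPipelineOptions
    // (falling back to defaults when the property is absent).
    PipelineOptions options = TestPipeline.testingPipelineOptions();
    TestPipeline pipeline = TestPipeline.fromOptions(options);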

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 03, 2022 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 03, 2022 12:44:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 03, 2022 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 03, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 03, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 03, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 03, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 03, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 03, 2022 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 03, 2022 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 03, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 03, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 03, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 03, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 03, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 03, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 03, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1044945601]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 03, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 03, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 03, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 03, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 03, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 03, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 03, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 03, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 03, 2022 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 03, 2022 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 03, 2022 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 03, 2022 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8801640118088587940.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-kslTizlpr82YAYbvfr0goz3O0jIceMgc3PHe7jhC2jY.jar
    Mar 03, 2022 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 369 files cached, 1 files newly uploaded in 0 seconds
    Mar 03, 2022 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 03, 2022 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144984 bytes, hash 1b1c7cf9cedaa29940cf28a3d34db2d295174c84c1c99f02cb04af8180576d7a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Gxx8-c7aoplAzyij002y0pUXTITByZ8CywSvgYBXbXo.pb
    Mar 03, 2022 12:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 03, 2022 12:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 03, 2022 12:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 03, 2022 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 03, 2022 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-03_04_45_10-10131891109301368352?project=apache-beam-testing
    Mar 03, 2022 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-03_04_45_10-10131891109301368352
    Mar 03, 2022 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-03_04_45_10-10131891109301368352
    Mar 03, 2022 12:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-03T12:45:11.126Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 03, 2022 12:45:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T12:45:31.372Z: Worker configuration: e2-standard-2 in us-central1-a.
    Mar 03, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T12:45:32.666Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 03, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T12:45:32.702Z: Expanding GroupByKey operations into optimizable parts.
    Mar 03, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T12:45:32.738Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 03, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T12:45:32.814Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 03, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T12:45:32.849Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 03, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T12:45:32.874Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 03, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T12:45:33.277Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 03, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T12:45:33.358Z: Starting 5 workers in us-central1-a...
    Mar 03, 2022 12:45:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T12:45:46.326Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 03, 2022 12:46:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T12:46:13.420Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 03, 2022 12:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T12:46:36.765Z: Workers have started successfully.
    Mar 03, 2022 12:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T12:46:36.799Z: Workers have started successfully.
    Mar 03, 2022 12:47:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T12:47:13.667Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 03, 2022 12:47:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T12:47:13.820Z: Cleaning up.
    Mar 03, 2022 12:47:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T12:47:13.915Z: Stopping worker pool...
    Mar 03, 2022 12:49:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T12:49:45.412Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 03, 2022 12:49:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T12:49:45.606Z: Worker pool stopped.
    Mar 03, 2022 12:49:58 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-03_04_45_10-10131891109301368352 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 137b11da-12de-4bef-b617-f66ad14ef9cd and timestamp: 2022-03-03T12:49:58.146000000Z:
                     Metric:                    Value:
                   read_time                    10.433
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 03, 2022 12:49:58 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 5 mins 8.392 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 36s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/xxlzqol56s6j4

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3092

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3092/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13903] Improve coverage of metricsx package (#16994)

[noreply] [BEAM-13892] Improve coverage of avroio package (#16990)


------------------------------------------
[...truncated 350.15 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 79d5b027de8ec43434045f271db56481
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 03, 2022 6:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 03, 2022 6:45:22 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 03, 2022 6:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 03, 2022 6:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 03, 2022 6:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 03, 2022 6:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 03, 2022 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 03, 2022 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 03, 2022 6:45:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 03, 2022 6:45:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 03, 2022 6:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 03, 2022 6:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 03, 2022 6:45:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 03, 2022 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 03, 2022 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 03, 2022 6:45:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 03, 2022 6:45:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1044945601]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 03, 2022 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 03, 2022 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 03, 2022 6:45:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 03, 2022 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 03, 2022 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 03, 2022 6:45:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 03, 2022 6:45:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 03, 2022 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 03, 2022 6:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 03, 2022 6:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 03, 2022 6:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 03, 2022 6:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8100692949725852214.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-2m0QWg9qbgrzPM7e1cSakoWvqDOHgPWlVCOkyIRFUDU.jar
    Mar 03, 2022 6:45:35 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 369 files cached, 1 files newly uploaded in 0 seconds
    Mar 03, 2022 6:45:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 03, 2022 6:45:35 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144984 bytes, hash e5326ed5c4d77351f2fab401c379eccf64807b8951da2a7e8f8d0b30b5808df8> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-5TJu1cTXc1Hy-rQBw3nsz2SAe4lR2ip-j40LMLWAjfg.pb
    Mar 03, 2022 6:45:38 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 03, 2022 6:45:38 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 03, 2022 6:45:38 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 03, 2022 6:45:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 03, 2022 6:45:39 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-02_22_45_38-12730640897565352796?project=apache-beam-testing
    Mar 03, 2022 6:45:39 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-02_22_45_38-12730640897565352796
    Mar 03, 2022 6:45:39 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-02_22_45_38-12730640897565352796
    Mar 03, 2022 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-03T06:45:39.914Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 03, 2022 6:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T06:45:51.217Z: Worker configuration: e2-standard-2 in us-central1-a.
    Mar 03, 2022 6:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T06:45:52.034Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 03, 2022 6:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T06:45:52.074Z: Expanding GroupByKey operations into optimizable parts.
    Mar 03, 2022 6:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T06:45:52.114Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 03, 2022 6:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T06:45:52.186Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 03, 2022 6:45:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T06:45:52.235Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 03, 2022 6:45:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T06:45:52.266Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 03, 2022 6:45:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T06:45:52.620Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 03, 2022 6:45:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T06:45:52.709Z: Starting 5 workers in us-central1-a...
    Mar 03, 2022 6:46:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T06:46:15.829Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 03, 2022 6:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T06:46:29.370Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 03, 2022 6:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T06:47:02.593Z: Workers have started successfully.
    Mar 03, 2022 6:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T06:47:02.621Z: Workers have started successfully.
    Mar 03, 2022 6:47:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T06:47:35.268Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 03, 2022 6:47:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T06:47:35.433Z: Cleaning up.
    Mar 03, 2022 6:47:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T06:47:35.522Z: Stopping worker pool...
    Mar 03, 2022 6:50:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T06:50:05.187Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 03, 2022 6:50:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T06:50:05.241Z: Worker pool stopped.
    Mar 03, 2022 6:50:11 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-02_22_45_38-12730640897565352796 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c98d6e96-e936-405e-a486-0db25aacd674 and timestamp: 2022-03-03T06:50:11.447000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                      8.15

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 03, 2022 6:50:11 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
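
Note that the read_time / fields_read values above are computed but never published: the InfluxDB publisher skips the upload because no measurement/database property was supplied. As an assumption based on Beam's shared test utilities (these option names do not appear in this job's command line and are not verified against it), the missing settings would normally be passed as extra pipeline options, for example:

    --influxDatabase=beam_test_metrics \
    --influxMeasurement=sql_bqio_read_java_batch \
    --influxHost=http://localhost:8086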

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 54.006 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 49s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/vbsnz7dtu5nxi

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3091

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3091/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13973] Link Dataproc Flink master URLs to the InteractiveRunner

[noreply] [BEAM-13925] Turn pr bot on for go prs (#16984)

[Pablo Estrada] Skipping flaky sad-path tests for Spanner changestreams

[noreply] [BEAM-13964] Bump kotlin to 1.6.x (#16882)

[noreply] Merge pull request #16906: [BEAM-13974] Handle idle Storage Api streams

[noreply] Merge pull request #16562 from [BEAM-13051][D] Enable pylint warnings

[noreply] [BEAM-13925] A couple small pr-bot bug fixes (#16996)

[noreply] [BEAM-14029] Add getter, setter for target maven repo (#16995)


------------------------------------------
[...truncated 361.50 KB...]
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 03, 2022 12:55:16 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 03, 2022 12:55:17 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 03, 2022 12:55:17 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 03, 2022 12:55:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 03, 2022 12:55:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 03, 2022 12:55:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 03, 2022 12:55:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 03, 2022 12:55:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 03, 2022 12:55:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 03, 2022 12:55:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
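
This failure (and the identical readUsingDefaultMethod failure below) stops at pipeline construction with Beam's standard coder-inference error for Row outputs. The remedy is the one the message itself lists: attach a schema, or an explicit coder, to the affected PCollection before it is consumed. A hedged fragment, assuming rows is the Row-typed PCollection and schema its org.apache.beam.sdk.schemas.Schema:

    import org.apache.beam.sdk.coders.RowCoder;

    // Either attach the schema so a RowCoder can be derived ...
    rows.setRowSchema(schema);
    // ... or set the coder explicitly, which is equivalent for Rows:
    rows.setCoder(RowCoder.of(schema));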

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 03, 2022 12:55:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 03, 2022 12:55:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 03, 2022 12:55:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 03, 2022 12:55:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 03, 2022 12:55:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 03, 2022 12:55:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 03, 2022 12:55:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@646482993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 03, 2022 12:55:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 03, 2022 12:55:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 03, 2022 12:55:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 03, 2022 12:55:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 03, 2022 12:55:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 03, 2022 12:55:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 03, 2022 12:55:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 03, 2022 12:55:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 03, 2022 12:55:24 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 03, 2022 12:55:30 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 03, 2022 12:55:30 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 03, 2022 12:55:30 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.38.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.38.0-SNAPSHOT-tests-5MzT8586lv6tUphlJ8HCxA2CBBfMbw1BILtgME463mM.jar
    Mar 03, 2022 12:55:30 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.38.0-SNAPSHOT-ep2upgjHH2S4bzhh55Hpyc9YuXuSuYYZThndQgr8pfE.jar
    Mar 03, 2022 12:55:30 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gradle-worker-Ioq3ST3Y71byKWCxbA2oiLIXfm_xx-c6z8dXhw6z8kY.jar
    Mar 03, 2022 12:55:30 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5848094968074329756.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-wB91Oz8KauzQwZowJMHQSwB6BquFMwfFJRIvuEHlsb4.jar
    Mar 03, 2022 12:55:32 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 4 files newly uploaded in 2 seconds
    Mar 03, 2022 12:55:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 03, 2022 12:55:32 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144984 bytes, hash d79a8c28fd2900ff0989808ca299b9c3bc3d9f276da92af6016bdcbf96a9cceb> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-15qMKP0pAP8JiYCMopm5w7w9nydtqSr2AWvcv5apzOs.pb
    Mar 03, 2022 12:55:36 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 03, 2022 12:55:36 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 03, 2022 12:55:36 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 03, 2022 12:55:36 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 03, 2022 12:55:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-02_16_55_37-13427141278151867852?project=apache-beam-testing
    Mar 03, 2022 12:55:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-02_16_55_37-13427141278151867852
    Mar 03, 2022 12:55:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-02_16_55_37-13427141278151867852
    Mar 03, 2022 12:55:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-03T00:55:39.057Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 03, 2022 12:55:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T00:55:52.431Z: Worker configuration: e2-standard-2 in us-central1-a.
    Mar 03, 2022 12:55:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T00:55:53.260Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 03, 2022 12:55:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T00:55:53.316Z: Expanding GroupByKey operations into optimizable parts.
    Mar 03, 2022 12:55:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T00:55:53.354Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 03, 2022 12:55:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T00:55:53.454Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 03, 2022 12:55:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T00:55:53.493Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 03, 2022 12:55:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T00:55:53.523Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 03, 2022 12:55:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T00:55:53.954Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 03, 2022 12:55:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T00:55:54.034Z: Starting 5 workers in us-central1-a...
    Mar 03, 2022 12:56:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T00:56:15.687Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 03, 2022 12:56:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T00:56:26.315Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Mar 03, 2022 12:56:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T00:56:26.350Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Mar 03, 2022 12:56:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T00:56:36.625Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 03, 2022 12:57:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T00:56:59.898Z: Workers have started successfully.
    Mar 03, 2022 12:57:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T00:56:59.927Z: Workers have started successfully.
    Mar 03, 2022 12:57:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T00:57:32.371Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 03, 2022 12:57:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T00:57:32.546Z: Cleaning up.
    Mar 03, 2022 12:57:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T00:57:32.620Z: Stopping worker pool...
    Mar 03, 2022 1:00:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T01:00:05.745Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 03, 2022 1:00:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-03T01:00:05.872Z: Worker pool stopped.
    Mar 03, 2022 1:00:12 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-02_16_55_37-13427141278151867852 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): fe7f59c3-0219-42ac-9134-adf5adf77340 and timestamp: 2022-03-03T01:00:12.416000000Z:
                     Metric:                    Value:
                   read_time                      9.03
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 03, 2022 1:00:12 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 4 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 5 mins 0.439 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 9m 43s
165 actionable tasks: 109 executed, 54 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/sqlta6xipl7na

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3090

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3090/display/redirect?page=changes>

Changes:

[Alexey Romanenko] Bump org.mongodb:mongo-java-driver to 3.12.10


------------------------------------------
[...truncated 357.18 KB...]
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 02, 2022 6:47:45 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 02, 2022 6:47:46 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 02, 2022 6:47:47 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 02, 2022 6:47:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 02, 2022 6:47:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 02, 2022 6:47:50 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 02, 2022 6:47:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 02, 2022 6:47:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 02, 2022 6:47:51 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 02, 2022 6:47:51 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 02, 2022 6:47:51 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 02, 2022 6:47:51 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 02, 2022 6:47:52 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 02, 2022 6:47:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 02, 2022 6:47:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 02, 2022 6:47:52 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 02, 2022 6:47:52 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@646482993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 02, 2022 6:47:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 02, 2022 6:47:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 02, 2022 6:47:52 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 02, 2022 6:47:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 02, 2022 6:47:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 02, 2022 6:47:52 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 02, 2022 6:47:52 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 02, 2022 6:47:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 02, 2022 6:47:53 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 02, 2022 6:47:57 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 02, 2022 6:47:57 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 02, 2022 6:47:57 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8561766979390015261.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-fuMQiUyEciNsgpKjBMCcRSGT396XHFT8R_BG8BN5zKg.jar
    Mar 02, 2022 6:47:57 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.thrift/libthrift/0.14.1/85348a0c44c298bbec5ae747e67ae12e60b3aef6/libthrift-0.14.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/libthrift-0.14.1-WzUQ_nLm8HJeKc7269seq6zMxp15_E7VC2gWAKh2Z-w.jar
    Mar 02, 2022 6:47:58 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.beam/beam-vendor-calcite-1_28_0/0.2/b8ec320b972b575ab37767bf8d4cfadff1fe304a/beam-vendor-calcite-1_28_0-0.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/beam-vendor-calcite-1_28_0-0.2-pvjNvR5NntriHz0ja9OKytRXl6tOlifYWKUI0wMGtVo.jar
    Mar 02, 2022 6:47:58 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mongodb/mongo-java-driver/3.12.10/7235fa6d2e74f57c2faa6f3a2cd83ca92e67fb8d/mongo-java-driver-3.12.10.jar to gs://temp-storage-for-perf-tests/loadtests/staging/mongo-java-driver-3.12.10-xkI1vh2cX7kLjABBul2_x0oSXY8kBPngHyESGk4LsHQ.jar
    Mar 02, 2022 6:47:58 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/8.5.46/5d686394334d143f48251827435ab086a161e75e/tomcat-embed-core-8.5.46.jar to gs://temp-storage-for-perf-tests/loadtests/staging/tomcat-embed-core-8.5.46-vl-FREjS7l1uADb-srT3ExYweaG2uaepdQjlWRetNcI.jar
    Mar 02, 2022 6:47:58 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.tomcat/tomcat-annotations-api/8.5.46/56c67699de192c603afd6f029e80e5ff8d98e7e9/tomcat-annotations-api-8.5.46.jar to gs://temp-storage-for-perf-tests/loadtests/staging/tomcat-annotations-api-8.5.46-amtG0OaVhkRRTAyjZYs7B-YSOmgqIO4203lSQnNfq8M.jar
    Mar 02, 2022 6:47:59 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 364 files cached, 6 files newly uploaded in 2 seconds
    Mar 02, 2022 6:47:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 02, 2022 6:47:59 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144984 bytes, hash 57b3bc929df14f68398f8c7f8d0792475c7cc1f21c7bec6089b18aa4ba42a02f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-V7O8kp3xT2g5j4x_jQeSR1x8wfIce-xgibGKpLpCoC8.pb
    Mar 02, 2022 6:48:02 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 02, 2022 6:48:02 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 02, 2022 6:48:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 02, 2022 6:48:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 02, 2022 6:48:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-02_10_48_03-4828047690844734917?project=apache-beam-testing
    Mar 02, 2022 6:48:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-02_10_48_03-4828047690844734917
    Mar 02, 2022 6:48:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-02_10_48_03-4828047690844734917
    Mar 02, 2022 6:48:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-02T18:48:05.014Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 02, 2022 6:48:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T18:48:18.745Z: Worker configuration: e2-standard-2 in us-central1-f.
    Mar 02, 2022 6:48:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T18:48:19.588Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 02, 2022 6:48:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T18:48:19.630Z: Expanding GroupByKey operations into optimizable parts.
    Mar 02, 2022 6:48:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T18:48:19.658Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 02, 2022 6:48:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T18:48:19.737Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 02, 2022 6:48:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T18:48:19.766Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 02, 2022 6:48:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T18:48:19.789Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 02, 2022 6:48:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T18:48:20.172Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 02, 2022 6:48:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T18:48:20.251Z: Starting 5 workers in us-central1-f...
    Mar 02, 2022 6:48:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T18:48:28.570Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 02, 2022 6:49:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T18:49:12.532Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 02, 2022 6:49:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T18:49:36.111Z: Workers have started successfully.
    Mar 02, 2022 6:49:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T18:49:36.138Z: Workers have started successfully.
    Mar 02, 2022 6:50:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T18:50:11.952Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 02, 2022 6:50:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T18:50:12.107Z: Cleaning up.
    Mar 02, 2022 6:50:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T18:50:12.193Z: Stopping worker pool...
    Mar 02, 2022 6:52:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T18:52:38.771Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 02, 2022 6:52:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T18:52:38.821Z: Worker pool stopped.
    Mar 02, 2022 6:53:06 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-02_10_48_03-4828047690844734917 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 45f0cf4f-a99e-4d02-abce-b80f59adfe84 and timestamp: 2022-03-02T18:53:06.573000000Z:
                     Metric:                    Value:
                   read_time                     9.663
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 02, 2022 6:53:06 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 5 mins 24.797 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 8m 17s
165 actionable tasks: 110 executed, 54 from cache, 1 up-to-date

Publishing build scan...
https://gradle.com/s/qumqioeb4gefk

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3089

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3089/display/redirect>

Changes:


------------------------------------------
[...truncated 349.34 KB...]
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 02, 2022 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 02, 2022 12:45:09 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 02, 2022 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 02, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 02, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 02, 2022 12:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 02, 2022 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 02, 2022 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 02, 2022 12:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 02, 2022 12:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
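
The failure above is Beam's standard coder-inference error for a PCollection<Row>: a row schema (or an explicit coder) must be attached before the pipeline is finalized, exactly as the message suggests via PCollection.setRowSchema or .setCoder(). A minimal sketch of that remedy, assuming an illustrative two-field schema and a pass-through DoFn standing in for the test's ParDo(RowMonitor):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFixExample {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.create());

        // Illustrative schema; the real HACKER_NEWS table has 14 fields.
        Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();

        PCollection<Row> rows =
            p.apply(
                Create.of(Row.withSchema(schema).addValues("alice", 3L).build())
                    .withRowSchema(schema));

        // A pass-through ParDo like the test's RowMonitor: its output element
        // type is Row, for which no coder can be inferred, so the schema (or
        // a coder, e.g. RowCoder.of(schema)) must be set on the output.
        PCollection<Row> monitored =
            rows.apply(
                    ParDo.of(
                        new DoFn<Row, Row>() {
                          @ProcessElement
                          public void processElement(
                              @Element Row row, OutputReceiver<Row> out) {
                            out.output(row);
                          }
                        }))
                .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }

Without the final setRowSchema call, this sketch fails at pipeline construction with the same IllegalStateException shown above.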

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1044945601]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)
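
Both Coder failures occur while expanding the query shown in the SQL:/SQLPlan> blocks above, before any data is read from BigQuery. For comparison, a minimal sketch of the same projection and filter run through SqlTransform against a small in-memory table, assuming an illustrative four-field schema in place of the real HACKER_NEWS table:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;

    public class HackerNewsQuerySketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.create());

        // Only the columns the query touches; names mirror the log above.
        Schema schema =
            Schema.builder()
                .addNullableField("by", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();

        PCollection<Row> rows =
            p.apply(
                Create.of(
                        Row.withSchema(schema)
                            .addValues("alice", "story", "hello", 3L)
                            .build(),
                        Row.withSchema(schema)
                            .addValues("bob", "comment", "dropped", 9L)
                            .build())
                    .withRowSchema(schema));

        // The tuple tag supplies the table name, so the `beam` catalog
        // qualifier from the integration test is dropped here.
        PCollection<Row> result =
            PCollectionTuple.of("HACKER_NEWS", rows)
                .apply(
                    SqlTransform.query(
                        "SELECT `by` AS author, `type`, `title`, `score` "
                            + "FROM HACKER_NEWS "
                            + "WHERE (`type` = 'story' OR `type` = 'job') "
                            + "AND `score` > 2"));

        p.run().waitUntilFinish();
      }
    }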

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 02, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 02, 2022 12:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 02, 2022 12:45:20 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 02, 2022 12:45:20 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 02, 2022 12:45:20 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7535218065088114653.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-WpZ09mtHe-bKTG0DW_FPl5NDpsNm96QCGgqlSEiY9P0.jar
    Mar 02, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 369 files cached, 1 file newly uploaded in 0 seconds
    Mar 02, 2022 12:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 02, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144984 bytes, hash 0bc1d623dbcd29e7bd6fd7a2c773f4908f2ce40bc64185e46f8688ccd5d389ec> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-C8HWI9vNKee9b9eix3P0kI8s5AvGQYXkb4aIzNXTiew.pb
    Mar 02, 2022 12:45:24 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 02, 2022 12:45:24 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 02, 2022 12:45:24 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 02, 2022 12:45:24 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 02, 2022 12:45:25 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-02_04_45_24-13165607155379119935?project=apache-beam-testing
    Mar 02, 2022 12:45:25 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-02_04_45_24-13165607155379119935
    Mar 02, 2022 12:45:25 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-02_04_45_24-13165607155379119935
    Mar 02, 2022 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-02T12:45:25.703Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 02, 2022 12:45:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T12:45:36.050Z: Worker configuration: e2-standard-2 in us-central1-a.
    Mar 02, 2022 12:45:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T12:45:36.661Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 02, 2022 12:45:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T12:45:36.709Z: Expanding GroupByKey operations into optimizable parts.
    Mar 02, 2022 12:45:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T12:45:36.733Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 02, 2022 12:45:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T12:45:36.811Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 02, 2022 12:45:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T12:45:36.842Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 02, 2022 12:45:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T12:45:36.878Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 02, 2022 12:45:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T12:45:37.253Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 02, 2022 12:45:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T12:45:37.352Z: Starting 5 workers in us-central1-a...
    Mar 02, 2022 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T12:45:39.245Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 02, 2022 12:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T12:46:12.911Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Mar 02, 2022 12:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T12:46:12.985Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Mar 02, 2022 12:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T12:46:23.307Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 02, 2022 12:46:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T12:46:45.214Z: Workers have started successfully.
    Mar 02, 2022 12:46:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T12:46:45.344Z: Workers have started successfully.
    Mar 02, 2022 12:47:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T12:47:17.110Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 02, 2022 12:47:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T12:47:17.261Z: Cleaning up.
    Mar 02, 2022 12:47:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T12:47:17.359Z: Stopping worker pool...
    Mar 02, 2022 12:49:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T12:49:43.790Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 02, 2022 12:49:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T12:49:43.845Z: Worker pool stopped.
    Mar 02, 2022 12:49:51 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-02_04_45_24-13165607155379119935 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 7b2f90f5-5616-4d5e-9f46-2d43ecf9761b and timestamp: 2022-03-02T12:49:51.071000000Z:
                     Metric:                    Value:
                   read_time                     8.406
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 02, 2022 12:49:51 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
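
The read_time and fields_read values above are produced by the monitor DoFns in the pipeline (ParDo(TimeMonitor), ParDo(RowMonitor)) updating Beam metrics as elements flow through. A sketch of the pattern, assuming illustrative namespace and metric names rather than the test's own:

    import org.apache.beam.sdk.metrics.Distribution;
    import org.apache.beam.sdk.metrics.Metrics;
    import org.apache.beam.sdk.transforms.DoFn;

    // Pass-through DoFn that records a processing timestamp per element.
    // A read_time-style figure can then be derived from the distribution's
    // max minus min after the job completes.
    public class TimeMonitorFn<T> extends DoFn<T, T> {
      private final Distribution timestamps =
          Metrics.distribution("perf", "processing_time_millis");

      @ProcessElement
      public void processElement(@Element T element, OutputReceiver<T> out) {
        timestamps.update(System.currentTimeMillis());
        out.output(element);
      }
    }

The InfluxDB warning just above concerns publishing, not collection: the values are computed either way, but without a configured database/measurement they are only printed to the console.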

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 46.075 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 32s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/w33j34qpln5nu

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3088

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3088/display/redirect>

Changes:


------------------------------------------
[...truncated 348.50 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is b79bd7a887a1fd865a143a3af492df65
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 02, 2022 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 02, 2022 6:45:15 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 02, 2022 6:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 02, 2022 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 02, 2022 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 02, 2022 6:45:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 02, 2022 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 02, 2022 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 02, 2022 6:45:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 02, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@735200015]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 02, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 02, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 02, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 02, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 02, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 02, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 02, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@646482993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 02, 2022 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 02, 2022 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 02, 2022 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 02, 2022 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 02, 2022 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 02, 2022 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 02, 2022 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 02, 2022 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 02, 2022 6:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 02, 2022 6:45:26 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 02, 2022 6:45:26 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 02, 2022 6:45:26 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test673820032590384126.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-sg3adZNd__v9_6_1KHuurSxCqbtnzdSqEAocFtQoSPw.jar
    Mar 02, 2022 6:45:26 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 369 files cached, 1 file newly uploaded in 0 seconds
    Mar 02, 2022 6:45:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 02, 2022 6:45:26 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144982 bytes, hash 09c9eee586c4c5700c5d30778eccf136f9794f45b8130b3d5bf1af9b1a669c5b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Ccnu5YbExXAMXTB3jszxNvl5T0W4Ews9W_GvmxpmnFs.pb
    Mar 02, 2022 6:45:29 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 02, 2022 6:45:30 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 02, 2022 6:45:30 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 02, 2022 6:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 02, 2022 6:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-01_22_45_30-838214627500681935?project=apache-beam-testing
    Mar 02, 2022 6:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-01_22_45_30-838214627500681935
    Mar 02, 2022 6:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-01_22_45_30-838214627500681935
    Mar 02, 2022 6:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-02T06:45:31.814Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 02, 2022 6:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T06:45:45.104Z: Worker configuration: e2-standard-2 in us-central1-f.
    Mar 02, 2022 6:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T06:45:45.857Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 02, 2022 6:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T06:45:45.898Z: Expanding GroupByKey operations into optimizable parts.
    Mar 02, 2022 6:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T06:45:45.927Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 02, 2022 6:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T06:45:46.017Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 02, 2022 6:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T06:45:46.041Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 02, 2022 6:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T06:45:46.073Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 02, 2022 6:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T06:45:46.454Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 02, 2022 6:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T06:45:46.527Z: Starting 5 workers in us-central1-f...
    Mar 02, 2022 6:46:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T06:46:11.509Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 02, 2022 6:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T06:46:35.600Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 02, 2022 6:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T06:46:59.079Z: Workers have started successfully.
    Mar 02, 2022 6:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T06:46:59.150Z: Workers have started successfully.
    Mar 02, 2022 6:47:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T06:47:30.969Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 02, 2022 6:47:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T06:47:31.123Z: Cleaning up.
    Mar 02, 2022 6:47:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T06:47:31.200Z: Stopping worker pool...
    Mar 02, 2022 6:50:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T06:50:04.338Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 02, 2022 6:50:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T06:50:04.422Z: Worker pool stopped.
    Mar 02, 2022 6:50:10 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-01_22_45_30-838214627500681935 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 7132e7f1-6694-4d44-a62e-df4444b3442d and timestamp: 2022-03-02T06:50:10.316000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.043

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 02, 2022 6:50:10 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
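
The "Pushing down the following filter" line in the run above is Beam SQL's BigQueryFilter being handed to the BigQuery Storage Read API, which is why only four fields are read and read_time stays low. A sketch of the direct-IO equivalent of that pushed-down read, assuming a placeholder table reference rather than the test's table:

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p =
            Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Same projection and row restriction the SQL planner pushed down;
        // "project:dataset.hacker_news" is a placeholder, not the test table.
        PCollection<TableRow> rows =
            p.apply(
                BigQueryIO.readTableRows()
                    .from("project:dataset.hacker_news")
                    .withMethod(Method.DIRECT_READ)
                    .withSelectedFields(
                        Arrays.asList("by", "type", "title", "score"))
                    .withRowRestriction(
                        "(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }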

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 59.937 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 47s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/iev4jkhnw4byo

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3087

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3087/display/redirect?page=changes>

Changes:

[noreply] Merge pull request #16484 from [BEAM-13633] [Playground] Implement

[noreply] Add 2022 events blog post (#16975)

[noreply] Clean up Go formatter suggestions (#16973)

[noreply] [BEAM-14012] Add go fmt to Github Actions (#16978)

[noreply] [BEAM-13911] Add basic tests to Go direct runner. (#16979)

[noreply] [BEAM-13960] Add support for more types when converting from between row


------------------------------------------
[...truncated 364.66 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is b79bd7a887a1fd865a143a3af492df65
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 6'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 6'
Successfully started process 'Gradle Test Executor 6'
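
The -DbeamTestPipelineOptions JSON in the executor command line above is what the test harness turns into pipeline options. A sketch of the same flags going through PipelineOptionsFactory directly, in a standalone main for illustration (note the job still passes the deprecated --workerHarnessContainerImage flag; the warning a few lines below recommends --sdkContainerImage instead):

    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class PerfTestOptionsSketch {
      public static void main(String[] args) {
        // The same flags the Jenkins job passes, minus the worker-jar and
        // container-image options that are specific to the CI environment.
        DataflowPipelineOptions options =
            PipelineOptionsFactory.fromArgs(
                    "--runner=DataflowRunner",
                    "--project=apache-beam-testing",
                    "--region=us-central1",
                    "--tempLocation=gs://temp-storage-for-perf-tests/loadtests",
                    "--numWorkers=5",
                    "--maxNumWorkers=5",
                    "--autoscalingAlgorithm=NONE")
                .as(DataflowPipelineOptions.class);

        System.out.println(options.getProject() + " / " + options.getRegion());
      }
    }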

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 02, 2022 12:56:44 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 02, 2022 12:56:46 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 02, 2022 12:56:47 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 369 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 02, 2022 12:56:50 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 02, 2022 12:56:50 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 02, 2022 12:56:51 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 02, 2022 12:56:51 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 02, 2022 12:56:51 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 02, 2022 12:56:51 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 02, 2022 12:56:52 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@378817368]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 02, 2022 12:56:52 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 02, 2022 12:56:52 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 02, 2022 12:56:52 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 02, 2022 12:56:52 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 02, 2022 12:56:52 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 02, 2022 12:56:52 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 02, 2022 12:56:53 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1340362810]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)
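
    The failure above is the standard missing-schema error for a
    PCollection<Row>: without an attached schema, Beam cannot infer a coder.
    A minimal sketch of the usual remedy, with a schema and values invented
    for illustration rather than taken from the test:

        import org.apache.beam.sdk.Pipeline;
        import org.apache.beam.sdk.schemas.Schema;
        import org.apache.beam.sdk.transforms.Create;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        Pipeline p = Pipeline.create();
        // Schema mirroring the columns projected by the query above.
        Schema schema =
            Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();
        // Attaching the schema lets Beam use RowCoder; omitting setRowSchema
        // reproduces the "Unable to return a default Coder" error above.
        PCollection<Row> rows =
            p.apply(
                    Create.of(
                        Row.withSchema(schema)
                            .addValues("alice", "story", "hello", 3L)
                            .build()))
                .setRowSchema(schema);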

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 02, 2022 12:56:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 02, 2022 12:56:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 02, 2022 12:56:53 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 02, 2022 12:56:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 02, 2022 12:56:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 02, 2022 12:56:53 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 02, 2022 12:56:53 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 02, 2022 12:56:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
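
    The projection and filter pushed down above correspond, at the IO level,
    to a BigQuery Storage API read with selected fields and a row restriction.
    A rough sketch assuming the BigQueryIO API; the table reference is a
    placeholder, not the table this job reads:

        import java.util.Arrays;
        import org.apache.beam.sdk.Pipeline;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

        Pipeline p = Pipeline.create();
        p.apply(
            BigQueryIO.readTableRows()
                .from("some-project:some_dataset.HACKER_NEWS") // placeholder
                .withMethod(TypedRead.Method.DIRECT_READ)
                // Column projection and row filtering happen server-side,
                // matching the pushed-down filter logged above.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));
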
    Mar 02, 2022 12:56:54 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 02, 2022 12:57:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 370 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 02, 2022 12:57:00 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 02, 2022 12:57:00 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1187668678744308794.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-XgaITbtMEGRShnM5WfB6vml9_GzT44fliC6AIR49wt8.jar
    Mar 02, 2022 12:57:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 369 files cached, 1 file newly uploaded in 1 second
    Mar 02, 2022 12:57:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 02, 2022 12:57:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144982 bytes, hash a469f96bc1e26b020ffd160d5c4df075a2e41cebddeb6242c15d11c9c4e68eae> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-pGn5a8HiawIP_RYNXE3wdaLkHOvd62JCwV0RycTmjq4.pb
    Mar 02, 2022 12:57:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 02, 2022 12:57:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 02, 2022 12:57:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 02, 2022 12:57:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 02, 2022 12:57:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-01_16_57_06-8277683359729565616?project=apache-beam-testing
    Mar 02, 2022 12:57:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-01_16_57_06-8277683359729565616
    Mar 02, 2022 12:57:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-01_16_57_06-8277683359729565616
    Mar 02, 2022 12:57:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-02T00:57:07.749Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 02, 2022 12:57:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T00:57:19.772Z: Worker configuration: e2-standard-2 in us-central1-b.
    Mar 02, 2022 12:57:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T00:57:20.633Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 02, 2022 12:57:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T00:57:20.691Z: Expanding GroupByKey operations into optimizable parts.
    Mar 02, 2022 12:57:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T00:57:20.728Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 02, 2022 12:57:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T00:57:20.826Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 02, 2022 12:57:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T00:57:20.865Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 02, 2022 12:57:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T00:57:20.899Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 02, 2022 12:57:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T00:57:21.356Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 02, 2022 12:57:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T00:57:21.444Z: Starting 5 workers in us-central1-b...
    Mar 02, 2022 12:57:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T00:57:29.120Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 02, 2022 12:58:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T00:58:04.788Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 02, 2022 12:58:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T00:58:31.978Z: Workers have started successfully.
    Mar 02, 2022 12:58:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T00:58:32.071Z: Workers have started successfully.
    Mar 02, 2022 12:59:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T00:59:07.568Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 02, 2022 12:59:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T00:59:07.817Z: Cleaning up.
    Mar 02, 2022 12:59:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T00:59:07.957Z: Stopping worker pool...
    Mar 02, 2022 1:01:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T01:01:33.068Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 02, 2022 1:01:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-02T01:01:33.142Z: Worker pool stopped.
    Mar 02, 2022 1:01:42 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-01_16_57_06-8277683359729565616 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): b5a51771-1ea3-4a19-9647-e0e01e2f99b6 and timestamp: 2022-03-02T01:01:42.694000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.423

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 02, 2022 1:01:42 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
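
    The metrics were collected but not published because the InfluxDB
    measurement/database settings were absent. Presumably publishing would be
    enabled by passing pipeline options along these lines; the option names
    and values are assumptions based on Beam's test utilities, not this job's
    configuration:

        --influxDatabase=beam_test_metrics
        --influxMeasurement=sql_bqio_read_java_batch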

Gradle Test Executor 6 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 5 mins 2.741 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 9m 8s
165 actionable tasks: 113 executed, 50 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/vgymj4akou24u

Stopped 5 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3086

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3086/display/redirect?page=changes>

Changes:

[egalpin] Use default context output rather than outputWithTimestamp for

[stranniknm] Palo Alto case study - fix link

[rogelio.hernandez] [BEAM-12777] Removed current docs version redirect

[noreply] Merge pull request #16850: [BEAM-11205] Upgrade Libraries BOM


------------------------------------------
[...truncated 396.47 KB...]
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/google-cloud-bigquery/2.8.0/da19ad3c5647a8cbe8ccf88c78f1c0c91ad695b3/google-cloud-bigquery-2.8.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-cloud-bigquery-2.8.0-xGfe5wbYeqJY2FxNhYRcvT2ChJrDIrqH18mx6phL9Ek.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-xds/1.44.0/4780c7dc39d77bdaf193c0ab82662f59ef870ea0/grpc-xds-1.44.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-xds-1.44.0-L6jU8qwOMP5Vn_1ELMXdY3dnsxGNgpKF2SShDuX8cNg.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.protobuf/protobuf-java-util/3.19.3/3e6812cbbb7e6faffa7b56438740dec510e1fc1a/protobuf-java-util-3.19.3.jar to gs://temp-storage-for-perf-tests/loadtests/staging/protobuf-java-util-3.19.3-HsBzPzLUn4GU-1RSYTA5NwYjcVc5VoptPJh9mngJwWw.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-grpclb/1.44.0/b70aa216f3addc259b3446b7b51e6a46dd9f7cf4/grpc-grpclb-1.44.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-grpclb-1.44.0-sk_iCMStMWwAaeBq95ZGuw0uRx0U1aysXRco1-VV2ak.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/google-cloud-core-http/2.4.0/7d1148b733ce94bea86ffe753d7aed2d103ca8c5/google-cloud-core-http-2.4.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-cloud-core-http-2.4.0-LefT_a41XzjwA43cs9AGfwiUpHrdYyu6NWS0gHEuf80.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/google-cloud-core/2.4.0/d8f474e9033586c4bb32bf4fdd15770758d2274/google-cloud-core-2.4.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-cloud-core-2.4.0-OjUKXcg0ZTpdYeqGeqpjENkVoQzLblb_BAKRuJt0Hyw.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api/gax-httpjson/0.96.0/6def3b984ad10b4ee243397c330c8810c32ac9ef/gax-httpjson-0.96.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gax-httpjson-0.96.0-Lv1Bua4llhkA3jISr6mgPMu5IC1P0QAHNG2Py53q66A.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-services/1.44.0/aa491bdacb62883b37d40c506cc815e9b36b310d/grpc-services-1.44.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-services-1.44.0-46RQGs-UKdsTax_YBoQsP64gS0lTgv0vXpQ6vyDvMhI.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-alts/1.44.0/366ec335bb023fdc08ac0990928fdb35638237d/grpc-alts-1.44.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-alts-1.44.0-BHppcJSf_YKvS3jLnnR_0xORYLwi7nKRzM5WXRBAmIs.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/grpc-google-cloud-bigtable-admin-v2/2.5.2/bf547a751eebccfb55990378fcc2f2b52e027f4b/grpc-google-cloud-bigtable-admin-v2-2.5.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-google-cloud-bigtable-admin-v2-2.5.2-1j2lisvA-7HhnzpAcelclfkU19i2S_jewG-xEuowgVg.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-cloud-bigquerystorage-v1/2.8.4/72afb65bfe27b8fff851a4e3cdc4c165f22263bd/proto-google-cloud-bigquerystorage-v1-2.8.4.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-bigquerystorage-v1-2.8.4-gKTPJtop8YfWmgSy4aNncoCv0HDe9ftAE2f-MjSurOU.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-cloud-bigtable-v2/2.5.2/d5175901a6a701301512e89930770de5cf888382/proto-google-cloud-bigtable-v2-2.5.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-bigtable-v2-2.5.2-u957sv6COUfE9qlxP7HFMu6tXD-2hjjPSvAIoiqe07k.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/grpc-google-cloud-bigtable-v2/2.5.2/5834a3f27af7945dc2f805e09c7376e2e2923d20/grpc-google-cloud-bigtable-v2-2.5.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-google-cloud-bigtable-v2-2.5.2-nsRLhF6l-4Clczr0F2B2E6c8ui-m0ELTDIWMjVu_2v8.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-cloud-bigtable-admin-v2/2.5.2/29db65fadc23c9977841976374c49638fd10eec9/proto-google-cloud-bigtable-admin-v2-2.5.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-bigtable-admin-v2-2.5.2-I1vo-zcT5DAicVgBz0sqSxFP3d5Rj4AFO75w3A6217w.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/grpc-google-cloud-pubsub-v1/1.97.2/fb5044570ea95da23540e873902e6b536328bf41/grpc-google-cloud-pubsub-v1-1.97.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-google-cloud-pubsub-v1-1.97.2-gAbYxnaZbK5eTWXfZEh6nXo_SQwpVv2fmr7LntQV2F0.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-cloud-firestore-v1/3.0.12/3e755ced47b8b8e735be133236bfd9732a072275/proto-google-cloud-firestore-v1-3.0.12.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-firestore-v1-3.0.12-_b2sDZG_lYzYl035ec-_75ZuBFbHr25pEweeTLM7LnA.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-cloud-pubsub-v1/1.97.2/f5c80458e430a40726abf747f4a983e69b5c1384/proto-google-cloud-pubsub-v1-1.97.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-pubsub-v1-1.97.2-HYgkssF2SwoiMovmYojUhKkSYK8oIbwwYNkX-rdFdZ8.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-cloud-pubsublite-v1/1.4.9/2ffdb00b108f7d8e4deb33b245a9e8e47c7f147e/proto-google-cloud-pubsublite-v1-1.4.9.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-pubsublite-v1-1.4.9-aESR-ST5vW8Bfcr2ncRbEnbHBpDY2AgeQ8YeG38Pvos.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api/gax/2.11.0/91ba35a0af71719b96070322d2ebc58fb598ab7c/gax-2.11.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gax-2.11.0-677J-FTQ5XcGgpNwgcgGNiFC4LX6cJN9caf5x5M8jgM.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.http-client/google-http-client-jackson2/1.41.2/8321186a583df76ad22f94c14b42130486c150f3/google-http-client-jackson2-1.41.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-http-client-jackson2-1.41.2-fH2YZbslX3sibsHLncvAbw1K3CgRwgoCKzq_QZSwMIo.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-cloud-spanner-admin-database-v1/6.18.0/3a1933afd5cdb8ae7ab83d58728016ec2ddf1b21/proto-google-cloud-spanner-admin-database-v1-6.18.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-spanner-admin-database-v1-6.18.0-ouTd0WcPO7MLXY2t4qEy3l0U21aN75J0VYGEwp1nvGs.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.auth/google-auth-library-oauth2-http/1.4.0/948a5c7b51d8f07accafd1be2b12698a22d39908/google-auth-library-oauth2-http-1.4.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-auth-library-oauth2-http-1.4.0-u_mbujOUxpnEiy5nJX36lrhMbuVMNIvUJslCB7hCVLo.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/grpc-google-cloud-pubsublite-v1/1.4.9/fd71af82a4214b7bae8546dc0cca76e231a0ed27/grpc-google-cloud-pubsublite-v1-1.4.9.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-google-cloud-pubsublite-v1-1.4.9-Ww-sI1VBlEhrCYgtGDT27lIk5vOk2BL4VPUhqLZGGSw.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-cloud-spanner-v1/6.18.0/2a958803dbeae93593a595be8b90264a0bd958b5/proto-google-cloud-spanner-v1-6.18.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-spanner-v1-6.18.0-gslER-pRJ4H7kscN21mqocg3peMZtLNj0eAH0zLYBIw.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.http-client/google-http-client-gson/1.41.2/7c00f50c7d2d7a5c89fc4dcce132eb431a2f19ae/google-http-client-gson-1.41.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-http-client-gson-1.41.2-2Zml43H1mcv1lZEFdlpr6VHB-tb06I5GfDJEKGZV0QA.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.http-client/google-http-client-protobuf/1.41.2/3e8b8249f756081998f6bc1f6bb0529ab8402e34/google-http-client-protobuf-1.41.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-http-client-protobuf-1.41.2-tI-yH_oXVlXtjHcIMVaWoD7vocQ3xEhygr2D0DLB9T0.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.http-client/google-http-client-apache-v2/1.41.2/1d2c0a2d5a48a1597ede7208871f445d3791a5f3/google-http-client-apache-v2-1.41.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-http-client-apache-v2-1.41.2-Oo7qAn1y85PhY2rzCjwdj5sxvwI4KYk84cFz8XKEIZM.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.http-client/google-http-client-appengine/1.41.2/ad3e12fcc5bd536d7621ea3c0efd9b6332e9c863/google-http-client-appengine-1.41.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-http-client-appengine-1.41.2-imfB_2RpVPe6wDzfJTJlZG4a5F0X62JQczld1t7Ey3U.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.http-client/google-http-client/1.41.2/a1f873cb28083311a69784aca987f658f1b707ce/google-http-client-1.41.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-http-client-1.41.2-hgxUbe9VYy22M5cgSAueerSU8gO64FbLuAytWZyHhtg.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-auth/1.44.0/acf028c160f27c8ff2c728b804199ef5bdfe3781/grpc-auth-1.44.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-auth-1.44.0-l54Lw3tenmuLSjlZW8Ng61D3bbiet9UFO2afgz8gpBk.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-netty-shaded/1.44.0/3aaab77b8a448ac35085a0016b973981931bc700/grpc-netty-shaded-1.44.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-netty-shaded-1.44.0-3lhPwqJ2ZuCvyVVkFd6nzIGagRQ1uG7rhctxopZoN3c.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-netty/1.44.0/d72c64d1efa954b2ce836e30f673183af10ac767/grpc-netty-1.44.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-netty-1.44.0-NMIjo4xaz6QClmugg1ZjlIIdfnCByT7rTfP0JN9iKhg.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.opencensus/opencensus-contrib-grpc-util/0.31.0/a8c2b0e839b303c82a54d7c809ea83e98b7e05ae/opencensus-contrib-grpc-util-0.31.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/opencensus-contrib-grpc-util-0.31.0-l8kn1nvbmsYqZeadun9FgW-jmMIpgaDwOvr568JFAqo.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-core/1.44.0/b68d6638bee520200b8121bec5427722f2e0ec04/grpc-core-1.44.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-core-1.44.0-PJU7bU6E134uECyf2czd7-3zYGlOGx-pFWk2moQVi50.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/grpc-google-cloud-spanner-admin-instance-v1/6.18.0/ecb3893db3778bf978c571a058cc58bd0d01b11f/grpc-google-cloud-spanner-admin-instance-v1-6.18.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-google-cloud-spanner-admin-instance-v1-6.18.0-vu2ddN-hUb5jBPhhCLTtBzftW8Y21DFbzHNYQ0uxzrs.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-stub/1.44.0/5ee58904e9be31aca2682340c8f05f2bc51ccb6c/grpc-stub-1.44.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-stub-1.44.0-s_bp-Vqfh5-Hgt-IRVmZ-NKJtPlCdNAfrzTibVC_gbY.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/grpc-google-common-protos/2.7.2/6d0630c75fc98cc0d7c6747e1cac10f80115d356/grpc-google-common-protos-2.7.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-google-common-protos-2.7.2-k5cxLa65qiY-52aNXKv5UcQsAnwsEpXsrNTxL04c4-s.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-protobuf/1.44.0/aa739c79bd405f4d0a78735d82e6de847c7eec0b/grpc-protobuf-1.44.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-protobuf-1.44.0-fMR3nuFJuqscPKHKzPt-DIbYmUT7z4TXVGCm5GF6Ixw.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/grpc-google-cloud-bigquerystorage-v1beta1/0.132.4/f9aafccd8fe351839537d9e1f3283a69c1677c0b/grpc-google-cloud-bigquerystorage-v1beta1-0.132.4.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-google-cloud-bigquerystorage-v1beta1-0.132.4-uAtILA5kdJAnCDj0sI9KGAgn2GbBwSqMbGC_8Y88PeM.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/grpc-google-cloud-spanner-v1/6.18.0/f27d7447fead10add4ba099001adbc6ffe388ba7/grpc-google-cloud-spanner-v1-6.18.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-google-cloud-spanner-v1-6.18.0-I0yrJfNaQcbQgv4SDQWIG8EoDgB7k7ElfA4Mgs_2X9A.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/grpc-google-cloud-bigquerystorage-v1beta2/0.132.4/2019df0650fc1c19626af1219608240f7d0063e3/grpc-google-cloud-bigquerystorage-v1beta2-0.132.4.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-google-cloud-bigquerystorage-v1beta2-0.132.4-xCDQ0xKdxnTHIOw7iEjG-g2TzU29A_olJOcp7aQGkh4.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-protobuf-lite/1.44.0/99bd18a0d30535ca47948359016e2ad1df92247c/grpc-protobuf-lite-1.44.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-protobuf-lite-1.44.0-gjH8q_SUslKE_V9QlM7Ub0sXx0ghg5EmudiRqztpw-I.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/grpc-google-cloud-spanner-admin-database-v1/6.18.0/e5b0380a799c0e8718120e465023eb2ebf5a6fca/grpc-google-cloud-spanner-admin-database-v1-6.18.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-google-cloud-spanner-admin-database-v1-6.18.0-fqdzeKLK0bC8ipMQ0t3NPs6GxQ7mKDLA710agMWUQsA.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/grpc-google-cloud-bigquerystorage-v1/2.8.4/191ca5396dbdda88bd84b099654ed3e270c6482d/grpc-google-cloud-bigquerystorage-v1-2.8.4.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-google-cloud-bigquerystorage-v1-2.8.4-f4_BMDQ34eI5ODxdJCHcJzb2MekLEu_v92RXXsLXDOw.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api/api-common/2.1.3/dd99b85655281139640b3479311856fc0c386a55/api-common-2.1.3.jar to gs://temp-storage-for-perf-tests/loadtests/staging/api-common-2.1.3-JCWfdn0y7hC0OL2Z8egd93sziMeRvAUB9gLIPU6adPU.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-api/1.44.0/c1e6c5359d1501affbfee4926d1e4c584619570/grpc-api-1.44.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-api-1.44.0-pTPZND-SNxdUTTJgBNHDCaL8DMI6pdZ3BnTEUQpsvt8.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/proto-google-cloud-firestore-bundle-v1/3.0.12/4e8c985e3641aeb999f692ec6364362e6baf3cff/proto-google-cloud-firestore-bundle-v1-3.0.12.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-firestore-bundle-v1-3.0.12-Q_w51NVLamOqDn0Ysmy9eIxOMxj4yMB5IniMZmqQHQ4.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.opencensus/opencensus-contrib-http-util/0.31.0/3c8c3ead38d762d7f50c5571b05baf724474c5a5/opencensus-contrib-http-util-0.31.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/opencensus-contrib-http-util-0.31.0-vMbNebAMLCqln8KgLUCUEIMAWFDrtS2X1jkI0253r9M.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-cloud-spanner-admin-instance-v1/6.18.0/5a0247843739f075d9a814446ed3351005132d28/proto-google-cloud-spanner-admin-instance-v1-6.18.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-spanner-admin-instance-v1-6.18.0-h0OTq08ydWVGm5ARZTRgowYH1BGPcJ7qG-zXj49rRKY.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-cloud-bigquerystorage-v1beta1/0.132.4/5d7062a48c6a0ee0a46f91e0403928bf58c6f2a9/proto-google-cloud-bigquerystorage-v1beta1-0.132.4.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-bigquerystorage-v1beta1-0.132.4-AL7wbHfRQyPYd6JnkujLA81xWmzl_7paj0y3LttbUN8.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-cloud-bigquerystorage-v1beta2/0.132.4/3a1f0aa60461d6ecabedae42270c12b62f1d219e/proto-google-cloud-bigquerystorage-v1beta2-0.132.4.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-bigquerystorage-v1beta2-0.132.4-0AuwWHa499KOKAWsaTsaGtyp5G-C-idOzyN3gfM2LDo.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.opencensus/opencensus-api/0.31.0/6634f10ecd5eb3ac248f3ed5ee727c9a28c841bd/opencensus-api-0.31.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/opencensus-api-0.31.0-cCulXXjznVUZXc8EH9-qt6dJCprEUBNUJIftnk06TSM.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-context/1.44.0/343f88cb505db0976f1632df5edd17105cc4b74a/grpc-context-1.44.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-context-1.44.0-2JrpWvyCVJ8dmyaHoICRJHdEmVrWcRMsUJ-nV97tNQY.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.auth/google-auth-library-credentials/1.4.0/50f543f1da68956b4dec792c64e2f8a2f1dcf376/google-auth-library-credentials-1.4.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-auth-library-credentials-1.4.0-atRBbZNgod86ZgxDST6WBUFt3U3pU_-7FN1YKlkbCaE.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-tcnative-boringssl-static/2.0.46.Final/5aaf1b7bf74dfe4ae8458c308b616e05f98403f3/netty-tcnative-boringssl-static-2.0.46.Final.jar to gs://temp-storage-for-perf-tests/loadtests/staging/netty-tcnative-boringssl-static-2.0.46.Final-FwQDYE39opHKVimQF1gy_JrXfQ10KVl2YEGOsOWzRQc.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-handler-proxy/4.1.72.Final/90de0fe610454d4296052ae36acb8a6a1d0333f1/netty-handler-proxy-4.1.72.Final.jar to gs://temp-storage-for-perf-tests/loadtests/staging/netty-handler-proxy-4.1.72.Final-zhEjJQRQ0rF4FXSBd_02hDaQuM1wO0fm3BLRWdhtow0.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec-http/4.1.72.Final/a8f062d67303a5e4b2bc2ad48fb4fd8c99108e45/netty-codec-http-4.1.72.Final.jar to gs://temp-storage-for-perf-tests/loadtests/staging/netty-codec-http-4.1.72.Final--m_siAEL-vanQVtTZGcbaxj_trNamGq5e0I_2MOgF0s.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec-http2/4.1.72.Final/4b269e666fbace27d2c1efa57703e99b09655822/netty-codec-http2-4.1.72.Final.jar to gs://temp-storage-for-perf-tests/loadtests/staging/netty-codec-http2-4.1.72.Final-yJpwUA9Z6FY-cgqqgIJjpRS9nivZG6hOq4wsy0XyNLI.jar
    Mar 01, 2022 6:51:50 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.protobuf/protobuf-java/3.19.3/4b57f1b1b9e281231c3fcfc039ce3021e29ff570/protobuf-java-3.19.3.jar to gs://temp-storage-for-perf-tests/loadtests/staging/protobuf-java-3.19.3-RAZC47UmZj0cFbSiBxwmlvKiXwMTMqw-tD9jlp2Bmz0.jar
    Mar 01, 2022 6:51:51 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-iam-v1/1.2.1/47af1311cc0e2997ed808d02751f79dbee3877e3/proto-google-iam-v1-1.2.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-iam-v1-1.2.1-duyZ_Faoz2nDsITI9Fi0f5BJO7dqD7m_H64zf5_CNw8.jar
    Mar 01, 2022 6:51:51 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-common-protos/2.7.2/dd3e63a78e976e6ebd9bcf561e3315032df7612a/proto-google-common-protos-2.7.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-common-protos-2.7.2-BAj3I9xGHIOLiOiBG8Y5n2Amfz_N78124a63gsIAY-w.jar
    Mar 01, 2022 6:51:51 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-cloud-datastore-v1/0.93.4/24123d85c5bba9ac43b5105f1cc3f377125b0cee/proto-google-cloud-datastore-v1-0.93.4.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-datastore-v1-0.93.4-_FXGS30ruef4MnPhWdFIiCWu6Y4JodYODsBQzDEx72k.jar
    Mar 01, 2022 6:51:51 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.auto.value/auto-value-annotations/1.9/25a0fcef915f663679fcdb447541c5d86a9be4ba/auto-value-annotations-1.9.jar to gs://temp-storage-for-perf-tests/loadtests/staging/auto-value-annotations-1.9--lRp9MRO5Zii2PAzqwqdy8ZJigxeDJmN-gwq31E1gEQ.jar
    Mar 01, 2022 6:51:51 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec-socks/4.1.72.Final/d3bc427b6e2dc4bb6dc9d18d1cc47f8530970a8b/netty-codec-socks-4.1.72.Final.jar to gs://temp-storage-for-perf-tests/loadtests/staging/netty-codec-socks-4.1.72.Final-M8TNE0hYqrxa_l3EQMiIAYYBfcTcVB80yOFr_8VUbQc.jar
    Mar 01, 2022 6:51:51 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-buffer/4.1.72.Final/f306eec3f79541f9b8af9c471a0d5b63b7996272/netty-buffer-4.1.72.Final.jar to gs://temp-storage-for-perf-tests/loadtests/staging/netty-buffer-4.1.72.Final-Vo_3zZ2OIoTsmAcwyIkk9oZkKSn48hmnRRi05kdV86E.jar
    Mar 01, 2022 6:51:51 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-tcnative-classes/2.0.46.Final/9937a832d9c19861822d345b48ced388b645aa5f/netty-tcnative-classes-2.0.46.Final.jar to gs://temp-storage-for-perf-tests/loadtests/staging/netty-tcnative-classes-2.0.46.Final-0-yIjcxKx5Fb-ItBfF4E_TVPQxEDKnSKaILfCTR-7Zo.jar
    Mar 01, 2022 6:51:51 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-common/4.1.72.Final/a55bac9c3af5f59828207b551a96ac19bbfc341e/netty-common-4.1.72.Final.jar to gs://temp-storage-for-perf-tests/loadtests/staging/netty-common-4.1.72.Final-ittMKRJgzrKFmmjEnwre7Ta_SVh2COK4Hs_2qvBgJek.jar
    Mar 01, 2022 6:51:51 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec/4.1.72.Final/613c4019d687db4e9a5532564e442f83c4474ed7/netty-codec-4.1.72.Final.jar to gs://temp-storage-for-perf-tests/loadtests/staging/netty-codec-4.1.72.Final-XYWRyicaHpwiTo3jhzqpk2rLWB7g21FOfcGFI9820Ww.jar
    Mar 01, 2022 6:51:51 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-resolver/4.1.72.Final/4ff458458ea32ed1156086820b624a815fcbf2c0/netty-resolver-4.1.72.Final.jar to gs://temp-storage-for-perf-tests/loadtests/staging/netty-resolver-4.1.72.Final-ZHRZiqt8ydjWz6BsBb0bGa2_f4RR291zBwszpsYLG5A.jar
    Mar 01, 2022 6:51:51 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-transport/4.1.72.Final/99138b436a584879355aca8fe3c64b46227d5d79/netty-transport-4.1.72.Final.jar to gs://temp-storage-for-perf-tests/loadtests/staging/netty-transport-4.1.72.Final-xfto6aZbbopRat_Ln6MjR57ntNlEnYpSnS7Ks9NxHVo.jar
    Mar 01, 2022 6:51:54 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 292 files cached, 78 files newly uploaded in 4 seconds
    Mar 01, 2022 6:51:54 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 01, 2022 6:51:54 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144982 bytes, hash 7e6899e0f5fc9936494f252158b8cf17e6f5eaad6a8fdefbe76f3b513f8f6fc5> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-fmiZ4PX8mTZJTyUhWLjPF-b16q1qj9775287UT-Pb8U.pb
    Mar 01, 2022 6:51:57 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 01, 2022 6:51:57 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 01, 2022 6:51:57 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 01, 2022 6:51:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 01, 2022 6:51:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-01_10_51_57-9387479801781556235?project=apache-beam-testing
    Mar 01, 2022 6:51:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-01_10_51_57-9387479801781556235
    Mar 01, 2022 6:51:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-01_10_51_57-9387479801781556235
    Mar 01, 2022 6:52:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-01T18:51:58.973Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 01, 2022 6:52:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T18:52:15.753Z: Worker configuration: e2-standard-2 in us-central1-b.
    Mar 01, 2022 6:52:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T18:52:16.566Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 01, 2022 6:52:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T18:52:16.634Z: Expanding GroupByKey operations into optimizable parts.
    Mar 01, 2022 6:52:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T18:52:16.661Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 01, 2022 6:52:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T18:52:16.723Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 01, 2022 6:52:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T18:52:16.750Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 01, 2022 6:52:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T18:52:16.782Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 01, 2022 6:52:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T18:52:17.102Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 01, 2022 6:52:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T18:52:17.181Z: Starting 5 workers in us-central1-b...
    Mar 01, 2022 6:52:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T18:52:43.544Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 01, 2022 6:53:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T18:53:09.149Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 01, 2022 6:53:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T18:53:33.051Z: Workers have started successfully.
    Mar 01, 2022 6:53:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T18:53:33.078Z: Workers have started successfully.
    Mar 01, 2022 6:54:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T18:54:06.882Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 01, 2022 6:54:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T18:54:07.030Z: Cleaning up.
    Mar 01, 2022 6:54:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T18:54:07.113Z: Stopping worker pool...
    Mar 01, 2022 6:56:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T18:56:38.659Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 01, 2022 6:56:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T18:56:38.694Z: Worker pool stopped.
    Mar 01, 2022 6:56:52 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-01_10_51_57-9387479801781556235 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 7ef9565f-ad7b-484a-80c6-864db365c3a1 and timestamp: 2022-03-01T18:56:52.172000000Z:
                     Metric:                    Value:
                   read_time                     8.947
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 01, 2022 6:56:52 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 8 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 5 mins 16.782 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 12m 30s
165 actionable tasks: 117 executed, 46 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/2ddswhvhl5j46

Stopped 7 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3085

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3085/display/redirect?page=changes>

Changes:

[noreply] [BEAM-9150] Fix beam_PostRelease_Python_Candidate (python RC validation


------------------------------------------
[...truncated 346.63 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 01, 2022 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 01, 2022 12:44:56 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 01, 2022 12:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 368 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 01, 2022 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 01, 2022 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 01, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
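
Aside: this statement is what the test hands to Beam SQL, and the planner output below is how Calcite lowers it. For readers who want the programmatic form, a minimal sketch using SqlTransform follows; the input PCollection<Row> named `rows` (which must already carry a schema) is an illustrative assumption, not the test's actual code.

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Beam SQL exposes the transform's input under the implicit table name
    // PCOLLECTION, so the same query can run against an in-pipeline collection.
    PCollection<Row> filtered =
        rows.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score "
                    + "FROM PCOLLECTION "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));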
    Mar 01, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 01, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 01, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 01, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@329929969]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
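
Aside: the IllegalStateException above is a pipeline-construction check, and its message already names the remedy: a PCollection of Beam Rows has no default coder until a schema is attached. A minimal sketch of that remedy follows, assuming a hand-built schema for the four projected columns; the field types and the variable `rowsFromSql` are illustrative, and the test's eventual fix may differ.

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Schema matching the projected columns (author, type, title, score).
    Schema schema =
        Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();

    // setRowSchema attaches the schema, letting Beam infer a RowCoder so
    // PCollection.getCoder() no longer fails during pipeline construction.
    PCollection<Row> output = rowsFromSql.setRowSchema(schema);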

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 01, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 01, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 01, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 01, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 01, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 01, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 01, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@224885737]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 01, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 01, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 01, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 01, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 01, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 01, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 01, 2022 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 01, 2022 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
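
Aside: this push-down is what distinguishes readUsingDirectReadMethodPushDown, the one test in this run that reaches Dataflow and finishes DONE: both the projection (usedFields above) and the filter are handed to the BigQuery Storage Read API instead of being evaluated in a BeamCalcRel. A rough BigQueryIO-level equivalent is sketched below; the table spec is a hypothetical placeholder, not taken from the test.

    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;

    // Server-side projection and filtering over the Storage Read API, so only
    // the selected fields of matching rows leave BigQuery.
    BigQueryIO.readTableRows()
        .from("apache-beam-testing:beam.HACKER_NEWS")  // hypothetical table spec
        .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
        .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
        .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");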
    Mar 01, 2022 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 01, 2022 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 369 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 01, 2022 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8221861915849533412.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-0CDMIqGdhdb860ltMog4Xg3iNf3sI2lpa6OB6AegHFg.jar
    Mar 01, 2022 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 01, 2022 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 368 files cached, 1 files newly uploaded in 0 seconds
    Mar 01, 2022 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 01, 2022 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144626 bytes, hash 25c5b0b6a36e41425f4dcccf75888aebeaa9dadf7b895294458d5e34ccc33042> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-JcWwtqNuQUJfTczPdYiK6-qp2t97iVKURY1eNMzDMEI.pb
    Mar 01, 2022 12:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 01, 2022 12:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 01, 2022 12:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 01, 2022 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 01, 2022 12:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-03-01_04_45_12-3741401754893246781?project=apache-beam-testing
    Mar 01, 2022 12:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-03-01_04_45_12-3741401754893246781
    Mar 01, 2022 12:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-03-01_04_45_12-3741401754893246781
    Mar 01, 2022 12:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-01T12:45:16.713Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
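
Aside: this warning is expected for a performance run, which pins a fixed worker pool so read_time stays comparable across builds. The flags in the command line above map onto DataflowPipelineOptions roughly as in this sketch (illustrative, not the harness's actual setup code):

    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.runners.dataflow.options.DataflowPipelineWorkerPoolOptions.AutoscalingAlgorithmType;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    DataflowPipelineOptions options =
        PipelineOptionsFactory.create().as(DataflowPipelineOptions.class);
    options.setNumWorkers(5);        // fixed pool size
    options.setMaxNumWorkers(5);     // ignored when autoscaling is NONE, hence the warning
    options.setAutoscalingAlgorithm(AutoscalingAlgorithmType.NONE);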
    Mar 01, 2022 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T12:45:27.902Z: Worker configuration: e2-standard-2 in us-central1-f.
    Mar 01, 2022 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T12:45:28.696Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 01, 2022 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T12:45:28.729Z: Expanding GroupByKey operations into optimizable parts.
    Mar 01, 2022 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T12:45:28.790Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 01, 2022 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T12:45:28.854Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 01, 2022 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T12:45:28.898Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 01, 2022 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T12:45:28.929Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 01, 2022 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T12:45:29.265Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 01, 2022 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T12:45:29.338Z: Starting 5 workers in us-central1-f...
    Mar 01, 2022 12:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T12:45:54.981Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 01, 2022 12:46:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T12:46:11.383Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Mar 01, 2022 12:46:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T12:46:11.415Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Mar 01, 2022 12:46:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T12:46:21.720Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 01, 2022 12:46:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T12:46:44.319Z: Workers have started successfully.
    Mar 01, 2022 12:46:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T12:46:44.357Z: Workers have started successfully.
    Mar 01, 2022 12:47:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T12:47:18.712Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 01, 2022 12:47:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T12:47:18.877Z: Cleaning up.
    Mar 01, 2022 12:47:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T12:47:18.963Z: Stopping worker pool...
    Mar 01, 2022 12:49:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T12:49:47.635Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 01, 2022 12:49:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T12:49:47.685Z: Worker pool stopped.
    Mar 01, 2022 12:49:54 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-03-01_04_45_12-3741401754893246781 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 498a8e77-3fe3-477d-8aa8-91a4a244056e and timestamp: 2022-03-01T12:49:54.068000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.957
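
Aside: fields_read and read_time are custom metrics reported by the monitoring steps named earlier (ParDo(RowMonitor), ParDo(TimeMonitor)). A sketch of how such a counter is typically incremented inside a DoFn follows; the namespace and class names are illustrative assumptions, not the test's actual code.

    import org.apache.beam.sdk.metrics.Counter;
    import org.apache.beam.sdk.metrics.Metrics;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.values.Row;

    class RowMonitorFn extends DoFn<Row, Row> {
      // The counter is reported to the runner under namespace/name.
      private final Counter fieldsRead = Metrics.counter("perf_test", "fields_read");

      @ProcessElement
      public void processElement(@Element Row row, OutputReceiver<Row> out) {
        fieldsRead.inc(row.getFieldCount()); // add this row's field count to the total
        out.output(row);
      }
    }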

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 01, 2022 12:49:54 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 5 mins 2.358 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 32s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/ofahgchzfn46a

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3084

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3084/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13917] Improve coverage of databaseio package (#16956)

[noreply] [BEAM-13925] Add entry files to process new prs and pr updates for PR

[noreply] [BEAM-13899] Improve coverage of debug package (#16951)

[noreply] [BEAM-13907] Improve coverage of textio package (#16937)


------------------------------------------
[...truncated 346.51 KB...]
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 01, 2022 6:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 01, 2022 6:45:02 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 01, 2022 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 368 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 01, 2022 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 01, 2022 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 01, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 01, 2022 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 01, 2022 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 01, 2022 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 01, 2022 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@329929969]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 01, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 01, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 01, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 01, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 01, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 01, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 01, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1030952115]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 01, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 01, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 01, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 01, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 01, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 01, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 01, 2022 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 01, 2022 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Mar 01, 2022 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 01, 2022 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 369 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 01, 2022 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 01, 2022 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5178301128500772947.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-jPd_XCnMJSNeqdamgzibhfshcbrpctI2kbcJgNGGOp4.jar
    Mar 01, 2022 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/core-construction-java/build/libs/beam-runners-core-construction-java-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-runners-core-construction-java-2.38.0-SNAPSHOT-9wRDS_Pp0L_QPmqz4faMhuo4233HNbUF3o1fyfVb8w4.jar
    Mar 01, 2022 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/core-construction-java/build/libs/beam-runners-core-construction-java-2.38.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-runners-core-construction-java-2.38.0-SNAPSHOT-tests-HNY65Ru46tWIHr1I8Vy-Hhfady_9B5Q82P9glOk2cps.jar
    Mar 01, 2022 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/model/pipeline/build/libs/beam-model-pipeline-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-model-pipeline-2.38.0-SNAPSHOT-eI8Izyqyj8Eg986VCh_XCV375giHizstOO7g5dS6dyM.jar
    Mar 01, 2022 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 365 files cached, 4 files newly uploaded in 0 seconds
    Mar 01, 2022 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 01, 2022 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144626 bytes, hash 86c3c47dc60fa33a7e8f9bdaec3672183d58e33ec8acae9e0bd04cf6edac79e1> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-hsPEfcYPozp-j5va7DZyGD1Y4z7IrK6eC9BM9u2seeE.pb
    Mar 01, 2022 6:45:17 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 01, 2022 6:45:17 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 01, 2022 6:45:17 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 01, 2022 6:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 01, 2022 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-28_22_45_17-17846470865135814897?project=apache-beam-testing
    Mar 01, 2022 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-28_22_45_17-17846470865135814897
    Mar 01, 2022 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-28_22_45_17-17846470865135814897
    Mar 01, 2022 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-01T06:45:19.001Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 01, 2022 6:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T06:45:31.836Z: Worker configuration: e2-standard-2 in us-central1-c.
    Mar 01, 2022 6:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T06:45:32.758Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 01, 2022 6:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T06:45:32.892Z: Expanding GroupByKey operations into optimizable parts.
    Mar 01, 2022 6:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T06:45:32.963Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 01, 2022 6:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T06:45:33.040Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 01, 2022 6:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T06:45:33.076Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 01, 2022 6:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T06:45:33.102Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 01, 2022 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T06:45:33.423Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 01, 2022 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T06:45:33.493Z: Starting 5 workers in us-central1-c...
    Mar 01, 2022 6:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T06:45:59.330Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 01, 2022 6:46:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T06:46:17.864Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 01, 2022 6:46:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T06:46:42.301Z: Workers have started successfully.
    Mar 01, 2022 6:46:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T06:46:42.328Z: Workers have started successfully.
    Mar 01, 2022 6:47:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T06:47:16.363Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 01, 2022 6:47:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T06:47:17.662Z: Cleaning up.
    Mar 01, 2022 6:47:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T06:47:18.250Z: Stopping worker pool...
    Mar 01, 2022 6:49:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T06:49:45.711Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 01, 2022 6:49:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T06:49:45.884Z: Worker pool stopped.
    Mar 01, 2022 6:49:53 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-28_22_45_17-17846470865135814897 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8698a46c-5691-4bca-b455-c9045e43db93 and timestamp: 2022-03-01T06:49:53.881000000Z:
                     Metric:                    Value:
                   read_time                     7.775
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 01, 2022 6:49:53 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 56.977 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 27s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/ay7dpbcilihva

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3083

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3083/display/redirect?page=changes>

Changes:

[noreply] Fix ignored exception in BatchSpannerRead. (#16960)


------------------------------------------
[...truncated 358.57 KB...]
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Mar 01, 2022 12:47:58 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Mar 01, 2022 12:48:00 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Mar 01, 2022 12:48:02 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 368 files. Enable logging at DEBUG level to see which files will be staged.
    Mar 01, 2022 12:48:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 01, 2022 12:48:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 01, 2022 12:48:10 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 01, 2022 12:48:10 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 01, 2022 12:48:10 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 01, 2022 12:48:10 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 01, 2022 12:48:11 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@329929969]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Mar 01, 2022 12:48:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 01, 2022 12:48:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 01, 2022 12:48:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 01, 2022 12:48:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Mar 01, 2022 12:48:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 01, 2022 12:48:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 01, 2022 12:48:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@604047477]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 01, 2022 12:48:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 01, 2022 12:48:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 01, 2022 12:48:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Mar 01, 2022 12:48:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Mar 01, 2022 12:48:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Mar 01, 2022 12:48:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Mar 01, 2022 12:48:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Mar 01, 2022 12:48:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
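
    Note: unlike the BeamCalcRel plans in the failing tests, the BeamPushDownIOSourceRel
    above pushes both the projection (usedFields) and the filter into the BigQuery
    Storage Read API, so only the matching columns and rows leave BigQuery. Push-down
    requires the DIRECT_READ method, which the test selects through the table's
    properties; the DDL looks roughly like the following (the LOCATION shown is an
    assumption patterned on the public hacker_news dataset, and most columns are elided):

        CREATE EXTERNAL TABLE HACKER_NEWS(
            title VARCHAR, score INTEGER, `by` VARCHAR, `type` VARCHAR)
        TYPE bigquery
        LOCATION 'bigquery-public-data:hacker_news.full'
        TBLPROPERTIES '{"method": "DIRECT_READ"}'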
    Mar 01, 2022 12:48:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Mar 01, 2022 12:48:24 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 369 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Mar 01, 2022 12:48:24 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Mar 01, 2022 12:48:25 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2920853847341783018.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Q8CxeHR_KbGkPVxQJnk1fqzVCIuesgrTmCGTf9PuNk0.jar
    Mar 01, 2022 12:48:28 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 368 files cached, 1 files newly uploaded in 4 seconds
    Mar 01, 2022 12:48:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Mar 01, 2022 12:48:29 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144626 bytes, hash 825e1ef99693ab679a1ed1eba460e94cc99de478b8a1f7ef354bdd61e8143655> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-gl4e-ZaTq2eaHtHrpGDpTMmd5Hi4offvNUvdYegUNlU.pb
    Mar 01, 2022 12:48:34 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Mar 01, 2022 12:48:35 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Mar 01, 2022 12:48:35 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Mar 01, 2022 12:48:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Mar 01, 2022 12:48:36 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-28_16_48_35-18417915937622314850?project=apache-beam-testing
    Mar 01, 2022 12:48:36 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-28_16_48_35-18417915937622314850
    Mar 01, 2022 12:48:36 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-28_16_48_35-18417915937622314850
    Mar 01, 2022 12:48:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-03-01T00:48:39.010Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Mar 01, 2022 12:48:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T00:48:49.505Z: Worker configuration: e2-standard-2 in us-central1-f.
    Mar 01, 2022 12:48:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T00:48:50.260Z: Expanding CoGroupByKey operations into optimizable parts.
    Mar 01, 2022 12:48:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T00:48:50.386Z: Expanding GroupByKey operations into optimizable parts.
    Mar 01, 2022 12:48:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T00:48:50.448Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Mar 01, 2022 12:48:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T00:48:50.563Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Mar 01, 2022 12:48:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T00:48:50.603Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Mar 01, 2022 12:48:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T00:48:50.634Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Mar 01, 2022 12:48:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T00:48:51.200Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 01, 2022 12:48:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T00:48:51.291Z: Starting 5 workers in us-central1-f...
    Mar 01, 2022 12:49:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T00:49:04.701Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Mar 01, 2022 12:49:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T00:49:34.956Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Mar 01, 2022 12:50:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T00:49:59.683Z: Workers have started successfully.
    Mar 01, 2022 12:50:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T00:49:59.869Z: Workers have started successfully.
    Mar 01, 2022 12:50:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T00:50:38.551Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Mar 01, 2022 12:50:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T00:50:41.154Z: Cleaning up.
    Mar 01, 2022 12:50:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T00:50:42.443Z: Stopping worker pool...
    Mar 01, 2022 12:53:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T00:53:03.627Z: Autoscaling: Resized worker pool from 5 to 0.
    Mar 01, 2022 12:53:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-03-01T00:53:03.684Z: Worker pool stopped.
    Mar 01, 2022 12:53:12 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-28_16_48_35-18417915937622314850 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 26b2fedd-e5c8-4dd9-bbf0-e1cb08f6f666 and timestamp: 2022-03-01T00:53:12.515000000Z:
                     Metric:                    Value:
                   read_time                     8.964
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Mar 01, 2022 12:53:12 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
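
    The warning above means this run's metrics (read_time, fields_read) were printed
    but not pushed to InfluxDB: the publisher requires a measurement and database to
    be configured. In Beam's perf-test infrastructure these are normally supplied as
    pipeline options roughly like the following (option names are assumptions based
    on other Beam performance jobs, not taken from this job's configuration):

        --influxDatabase=beam_test_metrics
        --influxMeasurement=sql_bqio_read_java_batch
        --influxHost=http://localhost:8086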

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.067 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.27 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 5 mins 24.16 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 8m 34s
165 actionable tasks: 107 executed, 56 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/qyxdvsgvkrn3w

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3082

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3082/display/redirect?page=changes>

Changes:

[noreply] Build wheels for Python 3.9

[noreply] Merge pull request #16892 from [BEAM-13755] [Playground] Scroll the

[noreply] Merge pull request #16880 from [BEAM-13963][Playground] Get bucket name

[noreply] Merge pull request #16870 from [BEAM-13874][Playground] Tag multifile

[noreply] Merge pull request #16910 from [BEAM-13724] [Playground] Get the default

[noreply] [BEAM-14008] Fix incorrect guava import (#16966)


------------------------------------------
[...truncated 355.97 KB...]
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 28, 2022 6:49:09 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 28, 2022 6:49:10 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 28, 2022 6:49:10 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 368 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 28, 2022 6:49:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 28, 2022 6:49:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 28, 2022 6:49:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 28, 2022 6:49:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 28, 2022 6:49:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 28, 2022 6:49:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 28, 2022 6:49:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@329929969]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 28, 2022 6:49:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 28, 2022 6:49:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 28, 2022 6:49:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 28, 2022 6:49:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 28, 2022 6:49:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 28, 2022 6:49:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 28, 2022 6:49:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1030952115]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 28, 2022 6:49:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 28, 2022 6:49:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 28, 2022 6:49:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 28, 2022 6:49:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 28, 2022 6:49:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 28, 2022 6:49:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 28, 2022 6:49:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 28, 2022 6:49:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 28, 2022 6:49:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 28, 2022 6:49:24 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 369 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 28, 2022 6:49:24 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-xV1aRoDJQlfeDUqWr6hu5a45VhrhjeNd1H_B3k29wbY.jar
    Feb 28, 2022 6:49:24 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1216007316910989002.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-_VZPDf5bJzaG_RCCnWxA1A_9mVOn2WQcK94JIp7m82U.jar
    Feb 28, 2022 6:49:25 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 368 files cached, 1 files newly uploaded in 0 seconds
    Feb 28, 2022 6:49:25 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 28, 2022 6:49:25 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144626 bytes, hash fa0a50906d8c970896db16b1f22d763adf31b6eef04ba98240da33244e384fb0> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline--gpQkG2MlwiW2xax8i12Ot8xtu7wS6mCQNozJE44T7A.pb
    Feb 28, 2022 6:49:28 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 28, 2022 6:49:28 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 28, 2022 6:49:28 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 28, 2022 6:49:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 28, 2022 6:49:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-28_10_49_29-15599778114626035129?project=apache-beam-testing
    Feb 28, 2022 6:49:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-28_10_49_29-15599778114626035129
    Feb 28, 2022 6:49:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-28_10_49_29-15599778114626035129
    Feb 28, 2022 6:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-28T18:49:30.088Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 28, 2022 6:49:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T18:49:40.588Z: Worker configuration: e2-standard-2 in us-central1-b.
    Feb 28, 2022 6:49:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T18:49:41.468Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 28, 2022 6:49:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T18:49:41.511Z: Expanding GroupByKey operations into optimizable parts.
    Feb 28, 2022 6:49:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T18:49:41.539Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 28, 2022 6:49:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T18:49:41.696Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 28, 2022 6:49:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T18:49:41.759Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 28, 2022 6:49:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T18:49:41.803Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 28, 2022 6:49:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T18:49:42.159Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 28, 2022 6:49:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T18:49:42.254Z: Starting 5 workers in us-central1-b...
    Feb 28, 2022 6:49:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T18:49:51.399Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 28, 2022 6:50:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T18:50:19.683Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Feb 28, 2022 6:50:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T18:50:19.712Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Feb 28, 2022 6:50:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T18:50:30.059Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 28, 2022 6:50:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T18:50:53.662Z: Workers have started successfully.
    Feb 28, 2022 6:50:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T18:50:53.695Z: Workers have started successfully.
    Feb 28, 2022 6:51:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T18:51:27.029Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 28, 2022 6:51:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T18:51:27.169Z: Cleaning up.
    Feb 28, 2022 6:51:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T18:51:27.235Z: Stopping worker pool...
    Feb 28, 2022 6:53:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T18:53:55.509Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 28, 2022 6:53:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T18:53:55.558Z: Worker pool stopped.
    Feb 28, 2022 6:54:03 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-28_10_49_29-15599778114626035129 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 83918a1a-63a1-4369-a7c3-d8e4ed7b0f36 and timestamp: 2022-02-28T18:54:03.588000000Z:
                     Metric:                    Value:
                   read_time                    10.001
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 28, 2022 6:54:03 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.117 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 58.631 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 10s
165 actionable tasks: 105 executed, 58 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/wm35wv3ysrdou

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3081

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3081/display/redirect>

Changes:


------------------------------------------
[...truncated 349.70 KB...]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 28, 2022 12:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 28, 2022 12:44:52 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 28, 2022 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 368 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 28, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 28, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 28, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 28, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 28, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 28, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 28, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@329929969]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 28, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 28, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 28, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 28, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 28, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 28, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 28, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@224885737]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 28, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 28, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 28, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 28, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 28, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 28, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 28, 2022 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 28, 2022 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 28, 2022 12:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 28, 2022 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 369 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 28, 2022 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-x3bfOybJFOvSiYOWM7NAHEsEH3vzhI1pMEXkIwQeQuc.jar
    Feb 28, 2022 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test9078476468718900805.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Sv6YVyBVwZm_9akbl9hkeNzeanu-FE-jcjeAwY-rWmg.jar
    Feb 28, 2022 12:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 368 files cached, 1 files newly uploaded in 6 seconds
    Feb 28, 2022 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 28, 2022 12:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144626 bytes, hash 2de9b2ae8998f8a1e1c5e21108dc9527eb7d93a7f9bce323139589e3b5ed08d6> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-LemyromY-KHhxeIRCNyVJ-t9k6f5vOMjE5WJ47XtCNY.pb
    Feb 28, 2022 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 28, 2022 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 28, 2022 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 28, 2022 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 28, 2022 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-28_04_45_14-18043986961120361025?project=apache-beam-testing
    Feb 28, 2022 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-28_04_45_14-18043986961120361025
    Feb 28, 2022 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-28_04_45_14-18043986961120361025
    Feb 28, 2022 12:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-28T12:45:15.215Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 28, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T12:45:34.489Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 28, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T12:45:35.342Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 28, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T12:45:35.388Z: Expanding GroupByKey operations into optimizable parts.
    Feb 28, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T12:45:35.443Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 28, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T12:45:35.525Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 28, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T12:45:35.557Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 28, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T12:45:35.589Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 28, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T12:45:36.012Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 28, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T12:45:36.077Z: Starting 5 workers in us-central1-a...
    Feb 28, 2022 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T12:45:40.574Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 28, 2022 12:46:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T12:46:06.871Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Feb 28, 2022 12:46:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T12:46:06.958Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Feb 28, 2022 12:46:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T12:46:17.303Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Feb 28, 2022 12:46:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T12:46:17.330Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Feb 28, 2022 12:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T12:46:27.600Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 28, 2022 12:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T12:46:41.225Z: Workers have started successfully.
    Feb 28, 2022 12:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T12:46:41.393Z: Workers have started successfully.
    Feb 28, 2022 12:47:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T12:47:14.385Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 28, 2022 12:47:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T12:47:14.877Z: Cleaning up.
    Feb 28, 2022 12:47:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T12:47:14.946Z: Stopping worker pool...
    Feb 28, 2022 12:49:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T12:49:41.002Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 28, 2022 12:49:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T12:49:41.051Z: Worker pool stopped.
    Feb 28, 2022 12:49:48 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-28_04_45_14-18043986961120361025 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8511c389-5ca0-4e86-b0d2-2cdb7382ba29 and timestamp: 2022-02-28T12:49:48.355000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.203

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 28, 2022 12:49:48 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
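
The warning above is the reason no metrics leave the console: InfluxDBPublisher skips publishing when the measurement/database properties are absent. A minimal sketch of supplying them, assuming the influx* pipeline-option names used elsewhere in Beam's performance-test infrastructure (the host and values below are placeholders, not taken from this job's configuration):

    --influxDatabase=beam_test_metrics --influxMeasurement=sql_bqio_read_java_batch --influxHost=http://localhost:8086

Appended to the -DbeamTestPipelineOptions list above, those would give the publisher a target for the read_time and fields_read values it just computed.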

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 5 mins 0.711 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings
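
To find which script or plugin emits those deprecation warnings, the failing task can be re-run with the flag mentioned above; a sketch using this build's task path:

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --warning-mode all

Adding --stacktrace or --scan to the same invocation yields the extra detail suggested in the 'Try:' section above.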

BUILD FAILED in 5m 29s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/624dpqn3y4hdy

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3080

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3080/display/redirect>

Changes:


------------------------------------------
[...truncated 345.89 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is be2b2428c48266ebc3005438746ec79d
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 28, 2022 6:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 28, 2022 6:44:52 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 28, 2022 6:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 368 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 28, 2022 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 28, 2022 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 28, 2022 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 28, 2022 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 28, 2022 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 28, 2022 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 28, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@329929969]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
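
All three root causes listed in this exception point at the same gap: the monitoring ParDo emits Beam Rows, and a PCollection<Row> has no inferable coder until a schema is attached. A minimal sketch of the fix the message itself suggests, with a hypothetical pass-through DoFn standing in for the IT's RowMonitor (this is not the test's actual code, and the field types are assumptions based on the query above):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFix {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Schema for the four columns the failing query selects (types assumed).
        Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

        PCollection<Row> rows =
            p.apply(Create.of(Row.withSchema(schema)
                        .addValues("alice", "story", "hello beam", 3L)
                        .build())
                    .withRowSchema(schema))
             // Pass-through DoFn, like the RowMonitor in the stack trace.
             .apply("Monitor", ParDo.of(new DoFn<Row, Row>() {
               @ProcessElement
               public void process(@Element Row row, OutputReceiver<Row> out) {
                 out.output(row);
               }
             }))
             // Without this line, coder inference fails exactly as logged above.
             .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }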

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 28, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 28, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 28, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 28, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 28, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 28, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 28, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1030952115]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)
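
readUsingDefaultMethod fails the same way as readUsingDirectReadMethod; the two differ only in the table's read method. DEFAULT resolves to an export-based read, while DIRECT_READ goes through the BigQuery Storage Read API, which is what enables the push-down exercised by the next test. A sketch of how that property is declared for a Beam SQL BigQuery table (column list shortened and location invented for illustration; the plan above shows the real HACKER_NEWS table has fourteen fields):

    CREATE EXTERNAL TABLE HACKER_NEWS (
      title VARCHAR,
      score BIGINT,
      `by` VARCHAR,
      type VARCHAR
    )
    TYPE bigquery
    LOCATION 'apache-beam-testing:beam.HACKER_NEWS'
    TBLPROPERTIES '{"method": "DIRECT_READ"}'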

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 28, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 28, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 28, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 28, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 28, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 28, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 28, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 28, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
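
The usedFields and BigQueryFilter in the plan above surface at the IO level as a field selection plus a row restriction on the Storage Read API. Expressed directly against BigQueryIO, the pushed-down read is roughly the sketch below (the table reference is a placeholder, and this is not the code the planner actually generates):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    TypedRead<TableRow> read =
        BigQueryIO.readTableRows()
            .from("apache-beam-testing:beam.HACKER_NEWS")  // placeholder table
            .withMethod(TypedRead.Method.DIRECT_READ)
            // Project only the four columns the query uses...
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            // ...and evaluate the WHERE clause server-side.
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
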
    Feb 28, 2022 6:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 28, 2022 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 369 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 28, 2022 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-x3bfOybJFOvSiYOWM7NAHEsEH3vzhI1pMEXkIwQeQuc.jar
    Feb 28, 2022 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7676323044353042064.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test--8sL2ZaLR7OeMuc86dJeyVTzOTDVmy7AqK_UrBTESQI.jar
    Feb 28, 2022 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 368 files cached, 1 file newly uploaded in 0 seconds
    Feb 28, 2022 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 28, 2022 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144626 bytes, hash 82f5ff932a7cb309ae9b72bb4e2d1552a92bba45b909a7e711395f336f2473a3> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-gvX_kyp8swmum3K7Ti0VUqkrukW5CafnETlfM28kc6M.pb
    Feb 28, 2022 6:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 28, 2022 6:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 28, 2022 6:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 28, 2022 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 28, 2022 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-27_22_45_07-11142001713100249282?project=apache-beam-testing
    Feb 28, 2022 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-27_22_45_07-11142001713100249282
    Feb 28, 2022 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-27_22_45_07-11142001713100249282
    Feb 28, 2022 6:45:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-28T06:45:08.364Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 28, 2022 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T06:45:18.884Z: Worker configuration: e2-standard-2 in us-central1-c.
    Feb 28, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T06:45:19.615Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 28, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T06:45:19.656Z: Expanding GroupByKey operations into optimizable parts.
    Feb 28, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T06:45:19.674Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 28, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T06:45:19.744Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 28, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T06:45:19.787Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 28, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T06:45:19.819Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 28, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T06:45:20.139Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 28, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T06:45:20.218Z: Starting 5 workers in us-central1-c...
    Feb 28, 2022 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T06:45:39.245Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 28, 2022 6:46:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T06:46:03.560Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 28, 2022 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T06:46:29.237Z: Workers have started successfully.
    Feb 28, 2022 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T06:46:29.265Z: Workers have started successfully.
    Feb 28, 2022 6:47:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T06:47:03.692Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 28, 2022 6:47:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T06:47:03.891Z: Cleaning up.
    Feb 28, 2022 6:47:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T06:47:03.971Z: Stopping worker pool...
    Feb 28, 2022 6:49:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T06:49:28.314Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 28, 2022 6:49:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T06:49:28.367Z: Worker pool stopped.
    Feb 28, 2022 6:49:33 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-27_22_45_07-11142001713100249282 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): bb01438e-0626-4dd4-bcdf-fde29cfa5fcf and timestamp: 2022-02-28T06:49:33.759000000Z:
                     Metric:                    Value:
                   read_time                     9.857
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 28, 2022 6:49:33 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.039 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 46.461 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 15s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/t2qxtipfimhxy

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3079

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3079/display/redirect>

Changes:


------------------------------------------
[...truncated 348.87 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 28, 2022 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 28, 2022 12:44:53 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 28, 2022 12:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 368 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 28, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 28, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 28, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 28, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 28, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 28, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 28, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@329929969]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 28, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 28, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 28, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 28, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 28, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 28, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 28, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1030952115]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 28, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 28, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 28, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 28, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 28, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 28, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 28, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 28, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 28, 2022 12:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 28, 2022 12:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 369 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 28, 2022 12:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-x3bfOybJFOvSiYOWM7NAHEsEH3vzhI1pMEXkIwQeQuc.jar
    Feb 28, 2022 12:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3749933221429416062.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-K8p_4b-WEwt3hztuYwABqE-03NEMzbwn15UNjCYyw6Y.jar
    Feb 28, 2022 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 368 files cached, 1 file newly uploaded in 1 second
    Feb 28, 2022 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 28, 2022 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144626 bytes, hash ba4f9dd7179e79b865fed3caa08bbb510eeb609f67a7442cc50452bf2a9e0508> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-uk-d1xeeebhl_tPKoIu7UQ7rYJ9np0QsxQRSvyqeBQg.pb
    Feb 28, 2022 12:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 28, 2022 12:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 28, 2022 12:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 28, 2022 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 28, 2022 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-27_16_45_09-6046317259428152179?project=apache-beam-testing
    Feb 28, 2022 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-27_16_45_09-6046317259428152179
    Feb 28, 2022 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-27_16_45_09-6046317259428152179
    Feb 28, 2022 12:45:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-28T00:45:10.214Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 28, 2022 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T00:45:20.405Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 28, 2022 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T00:45:21.088Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 28, 2022 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T00:45:21.129Z: Expanding GroupByKey operations into optimizable parts.
    Feb 28, 2022 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T00:45:21.166Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 28, 2022 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T00:45:21.224Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 28, 2022 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T00:45:21.254Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 28, 2022 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T00:45:21.277Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 28, 2022 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T00:45:21.638Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 28, 2022 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T00:45:21.714Z: Starting 5 workers in us-central1-a...
    Feb 28, 2022 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T00:45:46.304Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 28, 2022 12:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T00:45:52.390Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Feb 28, 2022 12:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T00:45:52.421Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Feb 28, 2022 12:46:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T00:46:02.683Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 28, 2022 12:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T00:46:25.515Z: Workers have started successfully.
    Feb 28, 2022 12:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T00:46:25.542Z: Workers have started successfully.
    Feb 28, 2022 12:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T00:46:54.481Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 28, 2022 12:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T00:46:54.607Z: Cleaning up.
    Feb 28, 2022 12:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T00:46:54.671Z: Stopping worker pool...
    Feb 28, 2022 12:49:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T00:49:19.400Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 28, 2022 12:49:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-28T00:49:19.463Z: Worker pool stopped.
    Feb 28, 2022 12:49:25 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-27_16_45_09-6046317259428152179 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 27b4d432-086e-4af1-8d1c-a8ff722d2423 and timestamp: 2022-02-28T00:49:25.483000000Z:
                     Metric:                    Value:
                   read_time                     7.108
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 28, 2022 12:49:25 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.02 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 36.679 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 5s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/7modrwniondhu

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3078

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3078/display/redirect>

Changes:


------------------------------------------
[...truncated 346.30 KB...]
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is be2b2428c48266ebc3005438746ec79d
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 27, 2022 6:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 27, 2022 6:44:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 27, 2022 6:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 368 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 27, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 27, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 27, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 27, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 27, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 27, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 27, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@329929969]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
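
This coder failure (and the identical one in readUsingDefaultMethod further
down) reduces to one root cause: the PCollection<Row> produced by
ParDo(RowMonitor) carries no schema, so the SDK can neither infer a coder nor
construct a RowCoder for it. Below is a minimal, self-contained Java sketch of
the fix pattern the error message itself recommends (PCollection.setRowSchema);
the field names mirror the projected columns above, the field types are
assumptions, and none of this is the actual BigQueryIOPushDownIT code:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaExample {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // Schema for the projected columns; field types are assumed for this sketch.
        Schema schema = Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();

        Row row = Row.withSchema(schema)
            .addValues("someone", "story", "an example title", 3L)
            .build();

        // Attaching the schema is what lets the SDK build a RowCoder; without the
        // setRowSchema call, pipeline construction fails exactly as logged above.
        PCollection<Row> rows = p.apply(Create.of(row)).setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }

In the failing tests the same call would belong on the output of the
Row-producing transform inside the pipeline under test; which transform to
patch is a question for the test code, not something this log alone settles.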

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 27, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 27, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 27, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 27, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 27, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 27, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 27, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1030952115]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 27, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 27, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 27, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 27, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 27, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 27, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 27, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 27, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
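
The BeamPushDownIOSourceRel above is the difference between this variant and
the previous two: instead of a BeamCalcRel stacked on a BeamIOSourceRel, both
the projection (usedFields) and the filter travel into the BigQuery Storage
Read API. In Beam SQL, push-down of this kind is only available when the table
is registered with method DIRECT_READ; a hedged DDL sketch of such a
registration follows (the column list and LOCATION are placeholders, not the
test's real dataset):

    CREATE EXTERNAL TABLE HACKER_NEWS(
        title VARCHAR,
        score BIGINT,
        `by` VARCHAR,
        `type` VARCHAR
    )
    TYPE bigquery
    LOCATION 'apache-beam-testing:beam.HACKER_NEWS'
    TBLPROPERTIES '{"method": "DIRECT_READ"}'
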
    Feb 27, 2022 6:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 27, 2022 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 369 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 27, 2022 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-x3bfOybJFOvSiYOWM7NAHEsEH3vzhI1pMEXkIwQeQuc.jar
    Feb 27, 2022 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3732117651599387817.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-4ZCojxlIyW7q4IMO12JlgNS8XSGVxofJj4rg5QJAP3A.jar
    Feb 27, 2022 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 368 files cached, 1 files newly uploaded in 0 seconds
    Feb 27, 2022 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 27, 2022 6:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144626 bytes, hash 5357ad6b89c6c1f0fcd29b8584836b7a388c0c5f9db3da569af6fc405671a6ee> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-U1eta4nGwfD80puFhINrejiMDF-ds9pWmvb8QFZxpu4.pb
    Feb 27, 2022 6:45:10 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 27, 2022 6:45:10 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 27, 2022 6:45:10 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 27, 2022 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 27, 2022 6:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-27_10_45_10-439597236310966335?project=apache-beam-testing
    Feb 27, 2022 6:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-27_10_45_10-439597236310966335
    Feb 27, 2022 6:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-27_10_45_10-439597236310966335
    Feb 27, 2022 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-27T18:45:12.209Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 27, 2022 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T18:45:23.839Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 27, 2022 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T18:45:24.741Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 27, 2022 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T18:45:24.804Z: Expanding GroupByKey operations into optimizable parts.
    Feb 27, 2022 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T18:45:24.837Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 27, 2022 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T18:45:24.911Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 27, 2022 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T18:45:24.940Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 27, 2022 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T18:45:24.975Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 27, 2022 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T18:45:25.354Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 27, 2022 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T18:45:25.424Z: Starting 5 workers in us-central1-a...
    Feb 27, 2022 6:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T18:45:29.871Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 27, 2022 6:46:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T18:46:04.281Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 27, 2022 6:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T18:46:29.347Z: Workers have started successfully.
    Feb 27, 2022 6:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T18:46:29.380Z: Workers have started successfully.
    Feb 27, 2022 6:46:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T18:46:57.241Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 27, 2022 6:46:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T18:46:57.416Z: Cleaning up.
    Feb 27, 2022 6:46:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T18:46:57.521Z: Stopping worker pool...
    Feb 27, 2022 6:49:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T18:49:21.436Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 27, 2022 6:49:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T18:49:21.478Z: Worker pool stopped.
    Feb 27, 2022 6:49:27 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-27_10_45_10-439597236310966335 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 0b818e94-6712-41bf-b1f0-6d81d320d1b7 and timestamp: 2022-02-27T18:49:27.870000000Z:
                     Metric:                    Value:
                   read_time                     6.811
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 27, 2022 6:49:27 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
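
The consequence of this warning is that the read_time and fields_read values
above were computed but never written to InfluxDB. A sketch of the settings
that would satisfy the publisher, assuming the influx* pipeline-option names
used elsewhere in Beam's test utilities (the values here are illustrative):

    "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch","--influxHost=http://localhost:8086"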

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 38.242 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 4s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/mqyprijnyqvu2

Build cache (/home/jenkins/.gradle/caches/build-cache-1) removing files not accessed on or after Sun Feb 20 18:44:27 UTC 2022.
Build cache (/home/jenkins/.gradle/caches/build-cache-1) cleanup deleted 6 files/directories.
Build cache (/home/jenkins/.gradle/caches/build-cache-1) cleaned up in 0.525 secs.
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3077

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3077/display/redirect>

Changes:


------------------------------------------
[...truncated 346.28 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is be2b2428c48266ebc3005438746ec79d
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 27, 2022 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 27, 2022 12:44:56 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 27, 2022 12:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 368 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 27, 2022 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 27, 2022 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 27, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 27, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 27, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 27, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 27, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@329929969]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 27, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 27, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 27, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 27, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 27, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 27, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 27, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1030952115]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 27, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 27, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 27, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 27, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 27, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 27, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 27, 2022 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 27, 2022 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 27, 2022 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 27, 2022 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 369 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 27, 2022 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-x3bfOybJFOvSiYOWM7NAHEsEH3vzhI1pMEXkIwQeQuc.jar
    Feb 27, 2022 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4839304102159444719.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-zoRXJ5pIpO9oUfNZAqUeEgiNxUmkw0CeN_jlkenOoVY.jar
    Feb 27, 2022 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 368 files cached, 1 files newly uploaded in 1 seconds
    Feb 27, 2022 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 27, 2022 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144626 bytes, hash f820fbb93adf903f348db219a8d2ef4bdf695791c7264e0dd089175084622d21> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline--CD7uTrfkD80jbIZqNLvS99pV5HHJk4N0IkXUIRiLSE.pb
    Feb 27, 2022 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 27, 2022 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 27, 2022 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 27, 2022 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 27, 2022 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-27_04_45_11-13268129213852647401?project=apache-beam-testing
    Feb 27, 2022 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-27_04_45_11-13268129213852647401
    Feb 27, 2022 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-27_04_45_11-13268129213852647401
    Feb 27, 2022 12:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-27T12:45:13.137Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 27, 2022 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T12:45:27.427Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 27, 2022 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T12:45:28.165Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 27, 2022 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T12:45:28.215Z: Expanding GroupByKey operations into optimizable parts.
    Feb 27, 2022 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T12:45:28.247Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 27, 2022 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T12:45:28.326Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 27, 2022 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T12:45:28.361Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 27, 2022 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T12:45:28.399Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 27, 2022 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T12:45:28.762Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 27, 2022 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T12:45:28.844Z: Starting 5 workers in us-central1-a...
    Feb 27, 2022 12:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T12:45:58.908Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 27, 2022 12:46:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T12:46:19.957Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 27, 2022 12:46:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T12:46:44.253Z: Workers have started successfully.
    Feb 27, 2022 12:46:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T12:46:44.286Z: Workers have started successfully.
    Feb 27, 2022 12:47:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T12:47:12.855Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 27, 2022 12:47:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T12:47:13.027Z: Cleaning up.
    Feb 27, 2022 12:47:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T12:47:13.123Z: Stopping worker pool...
    Feb 27, 2022 12:49:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T12:49:43.669Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 27, 2022 12:49:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T12:49:43.739Z: Worker pool stopped.
    Feb 27, 2022 12:49:49 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-27_04_45_11-13268129213852647401 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): ba67c902-4cd0-40a9-9374-205c967dbb66 and timestamp: 2022-02-27T12:49:49.977000000Z:
                     Metric:                    Value:
                   read_time                     7.632
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 27, 2022 12:49:50 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 58.31 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 24s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/swhqpfdnfp72y

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3076

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3076/display/redirect>

Changes:


------------------------------------------
[...truncated 346.29 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is be2b2428c48266ebc3005438746ec79d
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 27, 2022 6:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 27, 2022 6:44:51 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 27, 2022 6:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 368 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 27, 2022 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 27, 2022 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 27, 2022 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 27, 2022 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 27, 2022 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 27, 2022 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 27, 2022 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@329929969]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 27, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 27, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 27, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 27, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 27, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 27, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 27, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1030952115]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 27, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 27, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 27, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 27, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 27, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 27, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 27, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 27, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 27, 2022 6:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 27, 2022 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 369 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 27, 2022 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-x3bfOybJFOvSiYOWM7NAHEsEH3vzhI1pMEXkIwQeQuc.jar
    Feb 27, 2022 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1346023745844292591.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Z1yNhubA8mMU_1PQBP2T2gdvuZocMbUlL6EfXLdJuGw.jar
    Feb 27, 2022 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 368 files cached, 1 files newly uploaded in 0 seconds
    Feb 27, 2022 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 27, 2022 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144626 bytes, hash 55b7d6aec76a991dc9ea84312946421ddb59134b0297dbf52b28b23e4a024d7b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-VbfWrsdqmR3J6oQxKUZCHdtZE0sCl9v1KyiyPkoCTXs.pb
    Feb 27, 2022 6:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 27, 2022 6:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 27, 2022 6:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 27, 2022 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 27, 2022 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-26_22_45_07-16312421223003612489?project=apache-beam-testing
    Feb 27, 2022 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-26_22_45_07-16312421223003612489
    Feb 27, 2022 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-26_22_45_07-16312421223003612489
    Feb 27, 2022 6:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-27T06:45:08.727Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 27, 2022 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T06:45:17.593Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 27, 2022 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T06:45:18.403Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 27, 2022 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T06:45:18.441Z: Expanding GroupByKey operations into optimizable parts.
    Feb 27, 2022 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T06:45:18.476Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 27, 2022 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T06:45:18.530Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 27, 2022 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T06:45:18.561Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 27, 2022 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T06:45:18.583Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 27, 2022 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T06:45:18.895Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 27, 2022 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T06:45:18.969Z: Starting 5 workers in us-central1-a...
    Feb 27, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T06:45:24.221Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 27, 2022 6:46:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T06:46:02.489Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 27, 2022 6:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T06:46:28.015Z: Workers have started successfully.
    Feb 27, 2022 6:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T06:46:28.049Z: Workers have started successfully.
    Feb 27, 2022 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T06:46:58.003Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 27, 2022 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T06:46:58.188Z: Cleaning up.
    Feb 27, 2022 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T06:46:58.260Z: Stopping worker pool...
    Feb 27, 2022 6:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T06:49:24.269Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 27, 2022 6:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T06:49:24.314Z: Worker pool stopped.
    Feb 27, 2022 6:49:31 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-26_22_45_07-16312421223003612489 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): fcc36d4c-4afa-425c-8a1a-fe3cfdcf2008 and timestamp: 2022-02-27T06:49:31.759000000Z:
                     Metric:                    Value:
                   read_time                     7.364
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 27, 2022 6:49:31 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
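
That warning means the run had no InfluxDB measurement/database configured, so the metrics above were printed but not published. A rough sketch of supplying them through Beam's test-utils settings builder (the builder methods and host value are assumptions to verify against the testutils package; the database and measurement values merely mirror this job's metricsBigQueryDataset/metricsBigQueryTable options):

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    // Hypothetical wiring: provide the two properties the publisher checks for.
    InfluxDBSettings settings =
        InfluxDBSettings.builder()
            .withHost("http://localhost:8086")           // placeholder host
            .withDatabase("beam_performance")            // mirrors the BQ dataset option
            .withMeasurement("sql_bqio_read_java_batch") // mirrors the BQ table option
            .get();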

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 44.456 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 11s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/xpbpf7vveh4ni

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3075

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3075/display/redirect>

Changes:


------------------------------------------
[...truncated 349.92 KB...]
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 27, 2022 12:44:50 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
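
As the warning notes, the legacy flag has a direct replacement; in this job's -DbeamTestPipelineOptions list that would presumably be a one-for-one swap, with the empty value carried over so the default worker container is still used:

    "--workerHarnessContainerImage="   (deprecated legacy option, as passed today)
    "--sdkContainerImage="             (preferred spelling per the warning)
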
    Feb 27, 2022 12:44:51 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 27, 2022 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 368 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 27, 2022 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 27, 2022 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 27, 2022 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 27, 2022 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 27, 2022 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 27, 2022 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 27, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@329929969]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
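
The failure above is the pipeline-construction coder check, not the Dataflow job itself: ParDo(RowMonitor) outputs a PCollection of Beam Rows, and a Row coder cannot be inferred without a schema. The exception text names the fix; a minimal sketch, with `rows` standing in for the monitored PCollection<Row> and the field types guessed from the query above (both are assumptions, not this test's actual code):

    import org.apache.beam.sdk.schemas.Schema;

    // Attach a schema so Beam can derive a Row coder for the ParDo output,
    // as the IllegalStateException suggests via PCollection.setRowSchema.
    Schema schema =
        Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addStringField("type")
            .addStringField("title")
            .addInt32Field("score")
            .build();
    rows.setRowSchema(schema);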

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 27, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 27, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 27, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 27, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 27, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 27, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 27, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1030952115]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 27, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 27, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 27, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 27, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 27, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 27, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 27, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 27, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 27, 2022 12:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 27, 2022 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 369 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 27, 2022 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-x3bfOybJFOvSiYOWM7NAHEsEH3vzhI1pMEXkIwQeQuc.jar
    Feb 27, 2022 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4968428552581706797.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-zqpAzfw-R7QVSF3g7srzBJW8Zs2g3etLlkrh1od7ia0.jar
    Feb 27, 2022 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/google-cloud-bigtable-emulator/0.137.1/b1639aa134de1302e43d9e9c3843f6ff853c510f/google-cloud-bigtable-emulator-0.137.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-cloud-bigtable-emulator-0.137.1-V2SlLU4BjIGf5QoF7YL-A-QNjw49NdXrNM4QoX9MXSI.jar
    Feb 27, 2022 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.testcontainers/kafka/1.16.3/640e33ce12581a172ebf8e2825c51dde35d87441/kafka-1.16.3.jar to gs://temp-storage-for-perf-tests/loadtests/staging/kafka-1.16.3-LTLWd_JMVmeGrp73oxhr7tDyeaDIrczZfQVoNPw_iTU.jar
    Feb 27, 2022 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mongodb/mongo-java-driver/3.12.7/1f45c6a397feeb46da75425619333d1cc6f90f78/mongo-java-driver-3.12.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/mongo-java-driver-3.12.7-D_zgBJWNb9mzmuetJ37a0X9XtpcfSGsXYpxe6eE8Tao.jar
    Feb 27, 2022 12:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 365 files cached, 4 files newly uploaded in 2 seconds
    Feb 27, 2022 12:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 27, 2022 12:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144626 bytes, hash 1c7d7387cb634c70de7842101f222b21c9b8dbbbe24f42a691995844d69a33ce> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-HH1zh8tjTHDeeEIQHyIrIcm427viT0KmkZlYRNaaM84.pb
    Feb 27, 2022 12:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 27, 2022 12:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 27, 2022 12:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 27, 2022 12:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 27, 2022 12:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-26_16_45_08-758407132659547939?project=apache-beam-testing
    Feb 27, 2022 12:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-26_16_45_08-758407132659547939
    Feb 27, 2022 12:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-26_16_45_08-758407132659547939
    Feb 27, 2022 12:45:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-27T00:45:09.189Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 27, 2022 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T00:45:20.365Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 27, 2022 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T00:45:20.977Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 27, 2022 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T00:45:21.017Z: Expanding GroupByKey operations into optimizable parts.
    Feb 27, 2022 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T00:45:21.042Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 27, 2022 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T00:45:21.110Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 27, 2022 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T00:45:21.142Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 27, 2022 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T00:45:21.171Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 27, 2022 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T00:45:21.528Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 27, 2022 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T00:45:21.617Z: Starting 5 workers in us-central1-a...
    Feb 27, 2022 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T00:45:44.892Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 27, 2022 12:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T00:45:55.958Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Feb 27, 2022 12:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T00:45:55.992Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Feb 27, 2022 12:46:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T00:46:06.284Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 27, 2022 12:46:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T00:46:30.836Z: Workers have started successfully.
    Feb 27, 2022 12:46:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T00:46:30.865Z: Workers have started successfully.
    Feb 27, 2022 12:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T00:46:57.660Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 27, 2022 12:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T00:46:57.818Z: Cleaning up.
    Feb 27, 2022 12:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T00:46:57.900Z: Stopping worker pool...
    Feb 27, 2022 12:49:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T00:49:24.492Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 27, 2022 12:49:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-27T00:49:24.536Z: Worker pool stopped.
    Feb 27, 2022 12:49:31 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-26_16_45_08-758407132659547939 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f26ce477-3844-4422-bc6c-a66d5f6f6dfe and timestamp: 2022-02-27T00:49:31.058000000Z:
                     Metric:                    Value:
                   read_time                     8.114
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 27, 2022 12:49:31 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 44.105 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 12s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/rbbzge3teh3ng

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3074

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3074/display/redirect>

Changes:


------------------------------------------
[...truncated 349.91 KB...]
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 26, 2022 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 26, 2022 6:45:10 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 26, 2022 6:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 368 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 26, 2022 6:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 26, 2022 6:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 26, 2022 6:45:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 26, 2022 6:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 26, 2022 6:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 26, 2022 6:45:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 26, 2022 6:45:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@329929969]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 26, 2022 6:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 26, 2022 6:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 26, 2022 6:45:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 26, 2022 6:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 26, 2022 6:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 26, 2022 6:45:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 26, 2022 6:45:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1030952115]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 26, 2022 6:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 26, 2022 6:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 26, 2022 6:45:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 26, 2022 6:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 26, 2022 6:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 26, 2022 6:45:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 26, 2022 6:45:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 26, 2022 6:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 26, 2022 6:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 26, 2022 6:45:24 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 369 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 26, 2022 6:45:24 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-x3bfOybJFOvSiYOWM7NAHEsEH3vzhI1pMEXkIwQeQuc.jar
    Feb 26, 2022 6:45:24 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2575477249487624385.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-6_9PdFJafiggZHjC_L9_mwDdE28rURFZgm3V4EgwLDk.jar
    Feb 26, 2022 6:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 368 files cached, 1 files newly uploaded in 0 seconds
    Feb 26, 2022 6:45:25 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 26, 2022 6:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144626 bytes, hash 875f847f49aa6a811d010aba63f7a5a7ff5e3dbe918c6c0bb47b5a93e616b367> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-h1-Ef0mqaoEdAQq6Y_elp_9ePb6RjGwLtHtak-YWs2c.pb
    Feb 26, 2022 6:45:28 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 26, 2022 6:45:28 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 26, 2022 6:45:28 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 26, 2022 6:45:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 26, 2022 6:45:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-26_10_45_28-3027515114885272009?project=apache-beam-testing
    Feb 26, 2022 6:45:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-26_10_45_28-3027515114885272009
    Feb 26, 2022 6:45:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-26_10_45_28-3027515114885272009
    Feb 26, 2022 6:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-26T18:45:30.082Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 26, 2022 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T18:45:38.804Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 26, 2022 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T18:45:39.602Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 26, 2022 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T18:45:39.641Z: Expanding GroupByKey operations into optimizable parts.
    Feb 26, 2022 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T18:45:39.668Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 26, 2022 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T18:45:39.745Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 26, 2022 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T18:45:39.778Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 26, 2022 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T18:45:39.813Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 26, 2022 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T18:45:40.181Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 26, 2022 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T18:45:40.253Z: Starting 5 workers in us-central1-a...
    Feb 26, 2022 6:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T18:45:56.225Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 26, 2022 6:46:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T18:46:15.507Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Feb 26, 2022 6:46:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T18:46:15.536Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Feb 26, 2022 6:46:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T18:46:25.833Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 26, 2022 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T18:46:50.231Z: Workers have started successfully.
    Feb 26, 2022 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T18:46:50.251Z: Workers have started successfully.
    Feb 26, 2022 6:47:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T18:47:20.498Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 26, 2022 6:47:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T18:47:20.637Z: Cleaning up.
    Feb 26, 2022 6:47:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T18:47:20.710Z: Stopping worker pool...
    Feb 26, 2022 6:49:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T18:49:45.651Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 26, 2022 6:49:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T18:49:45.695Z: Worker pool stopped.
    Feb 26, 2022 6:49:52 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-26_10_45_28-3027515114885272009 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): a7376b10-846a-45dc-a3b1-872289704495 and timestamp: 2022-02-26T18:49:52.900000000Z:
                     Metric:                    Value:
                   read_time                     7.667
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 26, 2022 6:49:52 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 46.764 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 34s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/g4qnbxwjgyuwi

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3073

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3073/display/redirect>

Changes:


------------------------------------------
[...truncated 374.68 KB...]
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.parquet/parquet-encoding/1.12.0/257c0c34ee68b52791e28c92447a5e065568c96f/parquet-encoding-1.12.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/parquet-encoding-1.12.0-QeYG1gK6Uy16hAjf1VrPtEVlNFO-8708TjX1lZw2nM8.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.parquet/parquet-format-structures/1.12.0/eb73b9cb2be4c142418f01c190c38c8db38ddb90/parquet-format-structures-1.12.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/parquet-format-structures-1.12.0-bgfWiNMvNb4Z7PRNlLMDIt6frDMrIbU43KJaqmLk4_o.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.4.14/c114c1e1c8172a7cd3f6ae39209a635f7a06c1a1/zookeeper-3.4.14.jar to gs://temp-storage-for-perf-tests/loadtests/staging/zookeeper-3.4.14-I-8r-QyMojP2i_PCSraZR_x7OOjIuTJ_XC_ZFArnrs8.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/de.flapdoodle.embed/de.flapdoodle.embed.process/2.1.2/986b38302fa10018d5baccee7f655c31ee9afe5b/de.flapdoodle.embed.process-2.1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/de.flapdoodle.embed.process-2.1.2-OasY7D5KRAimcZcWcjFwgi8Qb4B-iff1FfrVeNSih6A.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.directory.server/apacheds-kerberos-codec/2.0.0-M15/1c16e4e477183641c5f0dd5cdecd27ec331bacb5/apacheds-kerberos-codec-2.0.0-M15.jar to gs://temp-storage-for-perf-tests/loadtests/staging/apacheds-kerberos-codec-2.0.0-M15-SZb1tySX6U3YbWSjcBWMT7AEnuqbF_-LJ6RnHWwTbe0.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.directory.server/apacheds-i18n/2.0.0-M15/71c61c84683152ec2a6a65f3f96fe534e304fa22/apacheds-i18n-2.0.0-M15.jar to gs://temp-storage-for-perf-tests/loadtests/staging/apacheds-i18n-2.0.0-M15-vTt87Of8Y2TLzjK57dDpYoo-iJxqk83v8bXiEx4qAHw.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar to gs://temp-storage-for-perf-tests/loadtests/staging/slf4j-log4j12-1.7.30-TUHgHEDK-KbHSt0rBzBV2KTOHDDlgVQXexPxLXirvns.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.directory.api/api-util/1.0.0-M20/a871abf060b3cf83fc6dc4d7e3d151fce50ac3cb/api-util-1.0.0-M20.jar to gs://temp-storage-for-perf-tests/loadtests/staging/api-util-1.0.0-M20-_TL9BHzPFDxY0JO1iBGqgeU5-M-DwRh4CfGiQaHfEtE.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.directory.api/api-asn1-api/1.0.0-M20/5e6486ffa3125ba44dc410ead166e1d6ba8ac76d/api-asn1-api-1.0.0-M20.jar to gs://temp-storage-for-perf-tests/loadtests/staging/api-asn1-api-1.0.0-M20-SEqvS4iLDraZ2VvqJlwtW26-yVHXDlxfdpHNUt1Mgpg.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.ehcache/ehcache/3.3.1/8de32b5df06074d4d9ec48a0df9a9df290e2e5dd/ehcache-3.3.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/ehcache-3.3.1-uzutJRm0_eRWyjx_BqRCNdo0dpn89MspvJ9qDLCWBPY.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.zaxxer/HikariCP-java7/2.4.12/c5f7070c3ce390bd0b4c842fc75f4b698152dee2/HikariCP-java7-2.4.12.jar to gs://temp-storage-for-perf-tests/loadtests/staging/HikariCP-java7-2.4.12-uxHtPh-bbormGnHE8ssIucJ3h794-xqf8zUhdYeFbrc.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-csv/1.8/37ca9a9aa2d4be2599e55506a6d3170dd7a3df4/commons-csv-1.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-csv-1.8-qL1WZS7UZo2dWjOZSuUvWbnjnI6w68tmhOaK7udXmmE.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.alibaba/fastjson/1.2.69/6cb063f1d527ff65bdbb9ea74888a5ffc3f92197/fastjson-1.2.69.jar to gs://temp-storage-for-perf-tests/loadtests/staging/fastjson-1.2.69-KniRdEoDeOAJmpuB3b7U0uGn6GMvb4b4_8-jg-UeScg.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.janino/janino/3.0.11/e699e368095379ba0402ea1780a87fcaea16e68f/janino-3.0.11.jar to gs://temp-storage-for-perf-tests/loadtests/staging/janino-3.0.11-kje3HSMpGA5ZIQ6aqhAO4xNFTvCuWIYIx1yxkxlZG-E.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.spotbugs/spotbugs-annotations/3.1.9/2ef5127efcc1a899aab8c66d449a631c9a99c469/spotbugs-annotations-3.1.9.jar to gs://temp-storage-for-perf-tests/loadtests/staging/spotbugs-annotations-3.1.9-aMfEa0KZ6Ug34jaudC85mQGpUP6RD-PKcQAmdTtd0uE.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.janino/commons-compiler/3.0.11/f2a6ec7fbc929c9fc87ff8bb486c0574951c5b09/commons-compiler-3.0.11.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-compiler-3.0.11-DxpPXyZccBoxkzJErnBF_O8YtPpZUEF-Je5wvlDd2s8.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.codahale.metrics/metrics-core/3.0.1/1e98427c7f6e53363b598e2943e50903ce4f3657/metrics-core-3.0.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/metrics-core-3.0.1-sORikCIn9yYdDRpjtquyitz3EUQFjiQ8wJJ9NgofrqE.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.sun.jersey/jersey-json/1.9/1aa73e1896bcc7013fed247157d7f676226eb432/jersey-json-1.9.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jersey-json-1.9-zF1TX0PO8NHEZyQJYarjWAGoN6sBAxnnQbLHpmWPP9Y.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.jackson/jackson-xc/1.9.13/e3480072bc95c202476ffa1de99ff7ee9149f29c/jackson-xc-1.9.13.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jackson-xc-1.9.13-LSkF_Ox9HFW3dZlWF2hduwNnI1BwTZ5AtJLqtbVNC-c.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.jackson/jackson-jaxrs/1.9.13/534d72d2b9d6199dd531dfb27083dd4844082bba/jackson-jaxrs-1.9.13.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jackson-jaxrs-1.9.13-F3BXCmulyHpHlcCutA7nxf5eMd9k7x1HlaDUJ3lrhLs.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/commons-configuration/commons-configuration/1.6/32cadde23955d7681b0d94a2715846d20b425235/commons-configuration-1.6.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-configuration-1.6-RrcbllYVT2oW6ksdyEAmtSqTBfjv8EaitGVfoXOOXu4.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/commons-beanutils/commons-beanutils/1.9.4/d52b9abcd97f38c81342bb7e7ae1eee9b73cba51/commons-beanutils-1.9.4.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-beanutils-1.9.4-fZOMgXiQKARcCMBl6UvnX8KAUnYg1b1itRnVg4UyNoo.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/commons-digester/commons-digester/1.8/dc6a73fdbd1fa3f0944e8497c6c872fa21dca37e/commons-digester-1.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-digester-1.8-BWYjcwRPPf8RJWe3u136EXTpHgdMDHJ7RBJ4gBP0nVY.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.yetus/audience-annotations/0.12.0/e0efa60318229590103e31c69ebdaae56d903644/audience-annotations-0.12.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/audience-annotations-0.12.0-_7EB_AZjYP88d0V8kn_Xln-wlqbungRqtwcUR9ggjvw.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-annotations/2.10.1/573603581e99e21206d251a3392ef750e9bc18ab/hadoop-annotations-2.10.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/hadoop-annotations-2.10.1-OFIe3ZPg29De8bZXcD3x9HhwgIudOezUAtSqPFrPFkI.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/commons-pool/commons-pool/1.6/4572d589699f09d866a226a14b7f4323c6d8f040/commons-pool-1.6.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-pool-1.6-RsQrSjjcay21Op7lySxj2xA2ZdVmlOLPziyV1RpoYMw.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mortbay.jetty/jetty-sslengine/6.1.26/60367999cee49a3b09fa86bdcb52310b6c896014/jetty-sslengine-6.1.26.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jetty-sslengine-6.1.26-nF9rsWi6AbldJQtX8GHICU4c6cia5OdzSSussXGS6oc.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/commons-collections/commons-collections/3.2.2/8ad72fe39fa8c91eaaf12aadb21e0c3661fe26d5/commons-collections-3.2.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-collections-3.2.2-7urpF5FxRKaKdB1MDf9mqlxcX9hVk_8he87T_Iyng7g.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/commons-net/commons-net/3.1/2298164a7c2484406f2aa5ac85b205d39019896f/commons-net-3.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-net-3.1-NKWNbYClB0gwfmdOwntEEeZTb9EueL7EKOsu5JoSMAc.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.sun.jersey/jersey-client/1.9/d3c4b2b5f89db32c96ceddcb863684821910a7bb/jersey-client-1.9.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jersey-client-1.9-iuA68NBsRqUbZdEj7EDyRdppCZGqNmnO9HZ9uPNvvmg.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.sun.jersey/jersey-server/1.9/3a6ea7cc5e15c824953f9f3ece2201b634d90d18/jersey-server-1.9.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jersey-server-1.9-Pe2RsZgHdWG9UfbARCyc1wt1TYsxthr69Ei9qdAYSPA.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/javax.servlet/servlet-api/2.5/5959582d97d8b61f4d154ca9e495aafd16726e34/servlet-api-2.5.jar to gs://temp-storage-for-perf-tests/loadtests/staging/servlet-api-2.5-xljqNgpw-u6ttm-zyQpwLkFCoKt3aPmumChnjg2a1Nw.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.sun.jersey.contribs/jersey-guice/1.9/5963c28c47df7e5d6ad34cec80c071c368777f7b/jersey-guice-1.9.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jersey-guice-1.9-VE_JLSYlMyqajuqnpydM8a9nA5NqUK-oDZKnggCn3jQ.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.sun.jersey/jersey-core/1.9/8341846f18187013bb9e27e46b7ee00a6395daf4/jersey-core-1.9.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jersey-core-1.9-LG0OyI_Iw2y0FjfZwA0GmMIstrahN_pSbveC4A0iZbw.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/javax.servlet.jsp/jsp-api/2.1/63f943103f250ef1f3a4d5e94d145a0f961f5316/jsp-api-2.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jsp-api-2.1-VF9OfcZ4_7TPi9D9QLSkRwpAmnh8DqfQrS8I1WESmHs.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/xmlenc/xmlenc/0.52/d82554efbe65906d83b3d97bd7509289e9db561a/xmlenc-0.52.jar to gs://temp-storage-for-perf-tests/loadtests/staging/xmlenc-0.52-KCrhhfwv8n2ncUr5liiXwJz--vuIByIZxKL5xzYWwCY.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/commons-cli/commons-cli/1.2/2bf96b7aa8b611c177d329452af1dc933e14501c/commons-cli-1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-cli-1.2-582JUZVtNJtWi3zP1PWyUpqMET5nwysCj1L_2jcSWdk.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/log4j/log4j/1.2.17/5af35056b4d257e4b64b9e8069c0746e8b08629f/log4j-1.2.17.jar to gs://temp-storage-for-perf-tests/loadtests/staging/log4j-1.2.17-HTFpZEVpdyBScJF1Q2kIKmZRvUl4G2AF3rlOVnU0Bvk.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.parquet/parquet-jackson/1.12.0/cf19a7197bda364334325a8a1cde5bcf5889773d/parquet-jackson-1.12.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/parquet-jackson-1.12.0-5L8vHgq0ebpIlxhdwDV_0qW9_zAN2aUQ74t7ayZ0tzg.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mortbay.jetty/jetty/6.1.26/2f546e289fddd5b1fab1d4199fbb6e9ef43ee4b0/jetty-6.1.26.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jetty-6.1.26-IQkdOpwTSfZA_cQhUEpgTAQO2JCH7MEq--MjUzJu1OU.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/javax.xml.bind/jaxb-api/2.2.2/aeb3021ca93dde265796d82015beecdcff95bf09/jaxb-api-2.2.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jaxb-api-2.2.2-MCM99iFfuYLYeE3pHTB1lnSM6pjW1QIpPHw-hcFpcTc.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.squareup.okhttp/okhttp/2.7.5/7a15a7db50f86c4b64aa3367424a60e3a325b8f1/okhttp-2.7.5.jar to gs://temp-storage-for-perf-tests/loadtests/staging/okhttp-2.7.5-iKyf0btR-CvMZkzB65wiXJDcQ4nWYCMbTMc3vr_n0Ko.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.woodstox/stax2-api/4.2.1/a3f7325c52240418c2ba257b103c3c550e140c83/stax2-api-4.2.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/stax2-api-4.2.1-Z4Vn5ItRpCxlxpnyZlOa09Z21LGlsK19iezoudV3JXk.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty/3.10.6.Final/18ed04a0e502896552854926e908509db2987a00/netty-3.10.6.Final.jar to gs://temp-storage-for-perf-tests/loadtests/staging/netty-3.10.6.Final-h2ilD749k6iNjmAA6l1o4w9Q3JFbN2TDxYcPcMT7O0k.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs-client/2.10.1/b96dfefae8ddffe72b051a639cb11f4fe40f06ae/hadoop-hdfs-client-2.10.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/hadoop-hdfs-client-2.10.1-PY6m6joD2T6vr-wkECaRkhrDk3J5PT6sa_ybU6-bjkQ.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.java.dev.jna/jna-platform/4.0.0/deb6bf66918989b50209b8c9aaf3b2561af7f011/jna-platform-4.0.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jna-platform-4.0.0-B21i7Yfna9yzdQ_-gKpJXuIRK9chJmOVTWVH9IJEkuk.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.minidev/json-smart/2.3/7396407491352ce4fa30de92efb158adb76b5b/json-smart-2.3.jar to gs://temp-storage-for-perf-tests/loadtests/staging/json-smart-2.3-kD9IyKpMP2QmRAuNMt6J-h3COxFpq94l5OHQaKpncIs.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.htrace/htrace-core4/4.1.0-incubating/12b3e2adda95e8c41d9d45d33db075137871d2e2/htrace-core4-4.1.0-incubating.jar to gs://temp-storage-for-perf-tests/loadtests/staging/htrace-core4-4.1.0-incubating-XUW3kEhXw-StNrO8xXvi0sXzCMabX2pYvYaqfUiiXvY.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.minidev/accessors-smart/1.2/c592b500269bfde36096641b01238a8350f8aa31/accessors-smart-1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/accessors-smart-1.2-DHwmXWL8AHEk3DK5EzbpxCcmUdYpvF-hpOTjvHWOsuQ.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mortbay.jetty/jetty-util/6.1.26/e5642fe0399814e1687d55a3862aa5a3417226a9/jetty-util-6.1.26.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jetty-util-6.1.26-m5dM4rmfSCVLdhJjN9xFshIm84Oq7WFvWXgK2vFnwEc.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.jcraft/jsch/0.1.55/bbd40e5aa7aa3cfad5db34965456cee738a42a50/jsch-0.1.55.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jsch-0.1.55-1JKxWm0uo_HMOcQiyVPEDBIokHPb6DYNmMD2-ex0_EQ.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.nimbusds/nimbus-jose-jwt/7.9/b608cd5e306d67bb58fe5bd687387aa0671687a6/nimbus-jose-jwt-7.9.jar to gs://temp-storage-for-perf-tests/loadtests/staging/nimbus-jose-jwt-7.9-tPWEU-GAqYHrdEoZtNVq-xLxDD3TXnUxzFjubL9b8oY.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.ow2.asm/asm/5.0.4/da08b8cce7bbf903602a25a3a163ae252435795/asm-5.0.4.jar to gs://temp-storage-for-perf-tests/loadtests/staging/asm-5.0.4-iWYY7YrmJwJSGni8e-QrfEkaCOaSChX4mj7N7DHpoiA.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.inject/guice/3.0/9d84f15fe35e2c716a02979fb62f50a29f38aefa/guice-3.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/guice-3.0-GlnQQh_9NVzAtwtC3xwumvdEyKLQyS2jefX8ovB_HSI.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.jettison/jettison/1.1/1a01a2a1218fcf9faa2cc2a6ced025bdea687262/jettison-1.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jettison-1.1-N3lAKIsGQ8SHgBN_b2hXiTfh6lyitzgwqCDFCnt-2AE.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.sun.xml.bind/jaxb-impl/2.3.3/3758e8c1664979749e647a9ca8c7ea1cd83c9b1e/jaxb-impl-2.3.3.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jaxb-impl-2.3.3-5ReNDHlIJH91oTxom_NvTV1JEKEh9xKqOyCulDdwadg.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/jline/jline/0.9.94/99a18e9a44834afdebc467294e1138364c207402/jline-0.9.94.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jline-0.9.94-2N8P-xLYfKh2JxzaTVmz_rlBI4gsG-F2O3-vLgoLDLs.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.squareup.okio/okio/1.6.0/98476622f10715998eacf9240d6b479f12c66143/okio-1.6.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/okio-1.6.0-EUvcH0czimi8vJWr8vXNxyvu7JGBLy_Ne1IcGTeHYmY.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.jamesmurty.utils/java-xmlbuilder/0.4/ac5962e48cdee3a0a6e1f8e00fcb594747ac5aaf/java-xmlbuilder-0.4.jar to gs://temp-storage-for-perf-tests/loadtests/staging/java-xmlbuilder-0.4-aB5TxP_Vn6EgaIA7JZ46g9Q_B6R8ES50ihh97heesx8.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/asm/asm/3.1/c157def142714c544bdea2e6144645702adf7097/asm-3.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/asm-3.1-Mz_1NpBDl1t-AxuLJyBpN0QYVHOOA4wfR_mNByogQ3o.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.sonatype.sisu.inject/cglib/2.2.1-v20090111/7ce5e983fd0e6c78346f4c9cbfa39d83049dda2/cglib-2.2.1-v20090111.jar to gs://temp-storage-for-perf-tests/loadtests/staging/cglib-2.2.1-v20090111-QuHfsmvsvxpjPyW0fjn8xCK4XnfkwEaNmkT4hfX6C-I.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/javax.activation/activation/1.1/e6cb541461c2834bdea3eb920f1884d1eb508b50/activation-1.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/activation-1.1-KIHHnJ1u8BxY5ivuoT6dGsi4uqFvL8GYrW5ndt79zdM.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/javax.xml.stream/stax-api/1.0-2/d6337b0de8b25e53e81b922352fbea9f9f57ba0b/stax-api-1.0-2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/stax-api-1.0-2-6McOvXb5gslYKoLvgs9s4Up9WKSk3KXLe3_JiMgAibc.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.fusesource.leveldbjni/leveldbjni-all/1.8/707350a2eeb1fa2ed77a32ddb3893ed308e941db/leveldbjni-all-1.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/leveldbjni-all-1.8-wpchOw5vk5IwWVJ1PzCZpMAucLNlYmb-AYZ-e2wWD_4.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.sun.activation/jakarta.activation/1.2.2/74548703f9851017ce2f556066659438019e7eb5/jakarta.activation-1.2.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jakarta.activation-1.2.2-AhVnc-SunQSNFKVq011kS-6fEFKnkdBy3z3tPGVubho.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.geronimo.specs/geronimo-jcache_1.0_spec/1.0-alpha-1/ef92fbbc3a3a7f45bf021bcb75df2c6e0660dfac/geronimo-jcache_1.0_spec-1.0-alpha-1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/geronimo-jcache_1.0_spec-1.0-alpha-1-AHChLlj0kblXGTkTJSmaYpRTDubDziXlC9yYsLcAlmw.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.microsoft.sqlserver/mssql-jdbc/6.2.1.jre7/2912ca3a5ee674ec79cd6914b9f5d6282d083eb8/mssql-jdbc-6.2.1.jre7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/mssql-jdbc-6.2.1.jre7-nPollFCuNHHS5uLD2K78ziNuPa74s3NNIdyTw6W76AY.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/javax.inject/javax.inject/1/6975da39a7040257bd51d21a231b76c915872d38/javax.inject-1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/javax.inject-1-kcdwRKUMSBY2wy2Rb9ickRinIZU5BFLIEGUID5V95_8.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/aopalliance/aopalliance/1.0/235ba8b489512805ac13a8f9ea77a1ca5ebe3e8/aopalliance-1.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/aopalliance-1.0-Ct3sZw_tzT8RPFyAkdeDKA0j9146y4QbYanNsHk3agg.jar
    Feb 26, 2022 12:47:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.stephenc.jcip/jcip-annotations/1.0-1/ef31541dd28ae2cefdd17c7ebf352d93e9058c63/jcip-annotations-1.0-1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jcip-annotations-1.0-1-T8z_g4Kq_FiZYsTtsmL2qlleNPHhHmEFfRxqluj8cyM.jar
    Feb 26, 2022 12:47:22 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 275 files cached, 94 files newly uploaded in 1 seconds
    Feb 26, 2022 12:47:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 26, 2022 12:47:22 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144626 bytes, hash 39bba5e47a6d840c83040dc3cbbccdfa2a50aac52e57ae606274e561dcf6eab3> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Obul5HpthAyDBA3Dy7zN-ipQqsUuV65gYnTlYdz26rM.pb
    Feb 26, 2022 12:47:25 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 26, 2022 12:47:25 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 26, 2022 12:47:25 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 26, 2022 12:47:25 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 26, 2022 12:47:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-26_04_47_25-6298650718173320486?project=apache-beam-testing
    Feb 26, 2022 12:47:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-26_04_47_25-6298650718173320486
    Feb 26, 2022 12:47:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-26_04_47_25-6298650718173320486
    Feb 26, 2022 12:47:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-26T12:47:27.545Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 26, 2022 12:47:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T12:47:36.110Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 26, 2022 12:47:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T12:47:37.010Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 26, 2022 12:47:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T12:47:37.050Z: Expanding GroupByKey operations into optimizable parts.
    Feb 26, 2022 12:47:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T12:47:37.080Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 26, 2022 12:47:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T12:47:37.176Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 26, 2022 12:47:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T12:47:37.211Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 26, 2022 12:47:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T12:47:37.246Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 26, 2022 12:47:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T12:47:37.603Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 26, 2022 12:47:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T12:47:37.674Z: Starting 5 workers in us-central1-f...
    Feb 26, 2022 12:47:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T12:47:40.735Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 26, 2022 12:48:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T12:48:24.167Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 26, 2022 12:48:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T12:48:50.234Z: Workers have started successfully.
    Feb 26, 2022 12:48:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T12:48:50.267Z: Workers have started successfully.
    Feb 26, 2022 12:49:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T12:49:25.439Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 26, 2022 12:49:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T12:49:25.634Z: Cleaning up.
    Feb 26, 2022 12:49:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T12:49:25.722Z: Stopping worker pool...
    Feb 26, 2022 12:51:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T12:51:58.210Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 26, 2022 12:51:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T12:51:58.259Z: Worker pool stopped.
    Feb 26, 2022 12:52:04 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-26_04_47_25-6298650718173320486 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 43613b3f-9af9-4dbf-abbe-0f7c1c7ac296 and timestamp: 2022-02-26T12:52:04.196000000Z:
                     Metric:                    Value:
                   read_time                     8.524
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 26, 2022 12:52:04 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.052 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 5 mins 0.416 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 43s
165 actionable tasks: 104 executed, 59 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/daokkw4dq7ja6

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3072

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3072/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13015] Use a DirectExecutor for state since we are just completing


------------------------------------------
[...truncated 348.92 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is be2b2428c48266ebc3005438746ec79d
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 26, 2022 6:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 26, 2022 6:45:13 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 26, 2022 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 368 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 26, 2022 6:45:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 26, 2022 6:45:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 26, 2022 6:45:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 26, 2022 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 26, 2022 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 26, 2022 6:45:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 26, 2022 6:45:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@329929969]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
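
This is a pipeline-construction failure, not a Dataflow failure: ParDo(RowMonitor) outputs Beam Rows, and a Row PCollection has no default Coder unless a schema is attached. The remedy the message itself names is PCollection.setRowSchema. A self-contained sketch of that fix (the pipeline, values, and schema are hypothetical; the four fields mirror the columns selected by the query above):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.MapElements;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TypeDescriptors;

    public class RowSchemaFix {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        Schema schema =
            Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();

        PCollection<Row> rows =
            pipeline.apply(
                Create.of(
                        Row.withSchema(schema)
                            .addValues("someone", "story", "Show HN: example", 3L)
                            .build())
                    .withRowSchema(schema));

        // A transform that emits Rows (like the monitor ParDo here) loses the
        // schema; without setRowSchema on its output, coder inference fails at
        // construction time with exactly the IllegalStateException traced above.
        PCollection<Row> monitored =
            rows.apply(MapElements.into(TypeDescriptors.rows()).via(r -> r))
                .setRowSchema(schema);

        pipeline.run().waitUntilFinish();
      }
    }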

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 26, 2022 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 26, 2022 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 26, 2022 6:45:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 26, 2022 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 26, 2022 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 26, 2022 6:45:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 26, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1030952115]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)
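
The same root cause fails readUsingDefaultMethod. As the message notes, an explicit coder via .setCoder() is the other way out; for a Row PCollection that coder is RowCoder. A one-line sketch, building on the hypothetical rows and schema from the previous example:

    import org.apache.beam.sdk.coders.RowCoder;

    // Equivalent effect to setRowSchema(schema) for a PCollection<Row>:
    rows.setCoder(RowCoder.of(schema));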

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 26, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 26, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 26, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 26, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 26, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 26, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 26, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 26, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
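
For reference, the plan above is what Beam SQL produces when both the projection and the filter can be pushed into the BigQuery read: only the four used fields are fetched, and the predicate is evaluated by the storage API rather than in a BeamCalcRel. The query itself can be reproduced against any schema-aware PCollection with SqlTransform; a sketch (here `hackerNews` is a hypothetical PCollection<Row>, addressable in the query as PCOLLECTION; without the BigQuery table provider the filter runs in-pipeline instead of being pushed down):

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    PCollection<Row> filtered =
        hackerNews.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score "
                    + "FROM PCOLLECTION "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
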
    Feb 26, 2022 6:45:20 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 26, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 369 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 26, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-x3bfOybJFOvSiYOWM7NAHEsEH3vzhI1pMEXkIwQeQuc.jar
    Feb 26, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4941822633102926848.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-DoAvaSvE1qKwjrPjGKbvH0yGwWTj-qbauc5KdKxaQG0.jar
    Feb 26, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 368 files cached, 1 files newly uploaded in 0 seconds
    Feb 26, 2022 6:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 26, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144626 bytes, hash 425ab72e54689eb2814e98e0cbcb3d169a75859d4bdf608b8cadb291eb3419d0> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Qlq3LlRonrKBTpjgy8s9Fpp1hZ1L32CLjK2ykes0GdA.pb
    Feb 26, 2022 6:45:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 26, 2022 6:45:29 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 26, 2022 6:45:29 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 26, 2022 6:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 26, 2022 6:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-25_22_45_29-10260604935855208755?project=apache-beam-testing
    Feb 26, 2022 6:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-25_22_45_29-10260604935855208755
    Feb 26, 2022 6:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-25_22_45_29-10260604935855208755
    Feb 26, 2022 6:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-26T06:45:31.604Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 26, 2022 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T06:45:40.999Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 26, 2022 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T06:45:42.060Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 26, 2022 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T06:45:42.101Z: Expanding GroupByKey operations into optimizable parts.
    Feb 26, 2022 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T06:45:42.129Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 26, 2022 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T06:45:42.204Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 26, 2022 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T06:45:42.233Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 26, 2022 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T06:45:42.266Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 26, 2022 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T06:45:42.589Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 26, 2022 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T06:45:42.660Z: Starting 5 workers in us-central1-a...
    Feb 26, 2022 6:46:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T06:46:02.460Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 26, 2022 6:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T06:46:27.148Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 26, 2022 6:46:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T06:46:51.486Z: Workers have started successfully.
    Feb 26, 2022 6:46:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T06:46:51.513Z: Workers have started successfully.
    Feb 26, 2022 6:47:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T06:47:23.003Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 26, 2022 6:47:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T06:47:23.134Z: Cleaning up.
    Feb 26, 2022 6:47:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T06:47:23.204Z: Stopping worker pool...
    Feb 26, 2022 6:49:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T06:49:45.805Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 26, 2022 6:49:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T06:49:45.852Z: Worker pool stopped.
    Feb 26, 2022 6:49:53 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-25_22_45_29-10260604935855208755 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f51c9989-3c78-41c2-9dd6-24183611915f and timestamp: 2022-02-26T06:49:53.323000000Z:
                     Metric:                    Value:
                   read_time                     7.557
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 26, 2022 6:49:53 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
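    The measurement and database come from the test pipeline options, and metrics are skipped because neither was passed above. A sketch of the entries that would enable publishing, assuming the InfluxDB option names used by Beam's test utilities (values are placeholders), added to the -DbeamTestPipelineOptions array shown earlier:

        "--influxMeasurement=sql_bqio_read_java_batch",
        "--influxDatabase=beam_test_metrics",
        "--influxHost=http://localhost:8086"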

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 43.938 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 33s
165 actionable tasks: 104 executed, 59 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/5badngoelri6e

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3071

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3071/display/redirect?page=changes>

Changes:

[Ankur Goenka] [BEAM-13952] Sickbaying

[noreply] [BEAM-13912] Add more coverage for dataflow.go (#16903)

[noreply] [BEAM-12563] swaplevel general function for dataframe and series

[noreply] [BEAM-14001] Update coder.go unit tests (#16952)

[noreply] [BEAM-13910] Improve coverage of gcsx package (#16942)


------------------------------------------
[...truncated 367.08 KB...]
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 26, 2022 1:02:58 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
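    (The deprecation is only a rename of the option: the empty "--workerHarnessContainerImage=" entry in the options array above would be passed as "--sdkContainerImage=" instead.)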
    Feb 26, 2022 1:03:01 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 26, 2022 1:03:04 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 368 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 26, 2022 1:03:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 26, 2022 1:03:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 26, 2022 1:03:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 26, 2022 1:03:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 26, 2022 1:03:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 26, 2022 1:03:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 26, 2022 1:03:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@680584313]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
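
    The exception text names the fix: attach a schema (and therefore a RowCoder) to the Row-producing PCollection before the pipeline is finalized. A minimal self-contained sketch of that remedy, with a stand-in DoFn and field types guessed from the HACKER_NEWS query above (illustrative only, not the integration test's actual code):

        import org.apache.beam.sdk.Pipeline;
        import org.apache.beam.sdk.schemas.Schema;
        import org.apache.beam.sdk.transforms.Create;
        import org.apache.beam.sdk.transforms.DoFn;
        import org.apache.beam.sdk.transforms.ParDo;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        Pipeline p = Pipeline.create();

        Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")   // column type assumed from the query above
            .build();

        Row sample = Row.withSchema(schema)
            .addValues("someone", "story", "a title", 3L)
            .build();

        PCollection<Row> monitored =
            p.apply(Create.of(sample).withRowSchema(schema))
             .apply(ParDo.of(new DoFn<Row, Row>() {   // stand-in for ParDo(RowMonitor)
               @ProcessElement
               public void process(@Element Row row, OutputReceiver<Row> out) {
                 out.output(row);
               }
             }))
             .setRowSchema(schema);   // supplies the missing RowCoder; setCoder(RowCoder.of(schema)) also works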

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 26, 2022 1:03:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 26, 2022 1:03:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 26, 2022 1:03:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 26, 2022 1:03:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 26, 2022 1:03:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 26, 2022 1:03:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 26, 2022 1:03:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2081421216]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 26, 2022 1:03:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 26, 2022 1:03:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 26, 2022 1:03:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 26, 2022 1:03:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 26, 2022 1:03:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 26, 2022 1:03:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 26, 2022 1:03:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 26, 2022 1:03:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
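
    For comparison, the pushed-down plan above behaves much like a hand-written BigQueryIO direct read with a field projection and a row restriction. A sketch under that assumption (the table reference is a placeholder, not the table this test reads):

        import java.util.Arrays;
        import org.apache.beam.sdk.Pipeline;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

        Pipeline pipeline = Pipeline.create();
        pipeline.apply(
            BigQueryIO.readTableRows()
                .from("some-project:some_dataset.hacker_news")   // placeholder table
                .withMethod(TypedRead.Method.DIRECT_READ)        // BigQuery Storage Read API
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));  // filter evaluated by BigQuery
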
    Feb 26, 2022 1:03:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 26, 2022 1:03:51 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 369 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 26, 2022 1:03:51 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-rKRCHRzuJXnvZkzciH08XwByxLI1aTIFI2UDVTQg29E.jar
    Feb 26, 2022 1:03:53 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8562687151840305008.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-6_5HME6Hxvn2QGnrlStK-UGPfeSm2OvgIC-JLtWT1b4.jar
    Feb 26, 2022 1:03:59 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 368 files cached, 1 files newly uploaded in 8 seconds
    Feb 26, 2022 1:03:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 26, 2022 1:03:59 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144626 bytes, hash e4227a52e56f473240675ac3776c4728a184209e4d93913ae961d240b9e5b81a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-5CJ6UuVvRzJAZ1rDd2xHKKGEIJ5Nk5E66WHSQLnluBo.pb
    Feb 26, 2022 1:04:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 26, 2022 1:04:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 26, 2022 1:04:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 26, 2022 1:04:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 26, 2022 1:04:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-25_17_04_13-10461526808856065747?project=apache-beam-testing
    Feb 26, 2022 1:04:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-25_17_04_13-10461526808856065747
    Feb 26, 2022 1:04:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-25_17_04_13-10461526808856065747
    Feb 26, 2022 1:04:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-26T01:04:16.631Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 26, 2022 1:04:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T01:04:26.476Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 26, 2022 1:04:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T01:04:27.214Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 26, 2022 1:04:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T01:04:27.252Z: Expanding GroupByKey operations into optimizable parts.
    Feb 26, 2022 1:04:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T01:04:27.289Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 26, 2022 1:04:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T01:04:27.352Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 26, 2022 1:04:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T01:04:27.375Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 26, 2022 1:04:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T01:04:27.397Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 26, 2022 1:04:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T01:04:27.718Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 26, 2022 1:04:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T01:04:27.797Z: Starting 5 workers in us-central1-f...
    Feb 26, 2022 1:04:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T01:04:52.937Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 26, 2022 1:05:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T01:05:19.239Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 26, 2022 1:05:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T01:05:46.888Z: Workers have started successfully.
    Feb 26, 2022 1:05:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T01:05:46.913Z: Workers have started successfully.
    Feb 26, 2022 1:06:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T01:06:23.026Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 26, 2022 1:06:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T01:06:23.761Z: Cleaning up.
    Feb 26, 2022 1:06:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T01:06:23.979Z: Stopping worker pool...
    Feb 26, 2022 1:08:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T01:08:53.262Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 26, 2022 1:08:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-26T01:08:53.307Z: Worker pool stopped.
    Feb 26, 2022 1:09:01 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-25_17_04_13-10461526808856065747 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 25967b74-dbfe-462d-b836-7028c95bd017 and timestamp: 2022-02-26T01:09:01.074000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.783

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 26, 2022 1:09:01 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.058 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.086 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 6 mins 15.673 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 21m 28s
165 actionable tasks: 104 executed, 59 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/sc7ob66lgkwki

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3070

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3070/display/redirect?page=changes>

Changes:

[noreply] Memoize some objects for timer processing to reduce overhead. (#16207)

[noreply] [BEAM-13965] Use TypeDeserializer if type information is available to


------------------------------------------
[...truncated 362.87 KB...]
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 6'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 6'
Successfully started process 'Gradle Test Executor 6'

Gradle Test Executor 6 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 25, 2022 6:48:22 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 25, 2022 6:48:24 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 25, 2022 6:48:25 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 368 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 25, 2022 6:48:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 25, 2022 6:48:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 25, 2022 6:48:31 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 25, 2022 6:48:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 25, 2022 6:48:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 25, 2022 6:48:32 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 25, 2022 6:48:32 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@329929969]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 25, 2022 6:48:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 25, 2022 6:48:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 25, 2022 6:48:32 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 25, 2022 6:48:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 25, 2022 6:48:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 25, 2022 6:48:32 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 25, 2022 6:48:33 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1030952115]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 25, 2022 6:48:33 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 25, 2022 6:48:33 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 25, 2022 6:48:33 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 25, 2022 6:48:33 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 25, 2022 6:48:33 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 25, 2022 6:48:33 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 25, 2022 6:48:33 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 25, 2022 6:48:33 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 25, 2022 6:48:34 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 25, 2022 6:48:44 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 369 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 25, 2022 6:48:44 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-rKRCHRzuJXnvZkzciH08XwByxLI1aTIFI2UDVTQg29E.jar
    Feb 25, 2022 6:48:44 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8218055081597565204.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Qe1ZaDrqqmNEAwHWyjimd49AzOuA5iMe5-21JGjAKR0.jar
    Feb 25, 2022 6:48:45 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 368 files cached, 1 files newly uploaded in 0 seconds
    Feb 25, 2022 6:48:45 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 25, 2022 6:48:45 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144626 bytes, hash 6055105cf3cfc68abd2ff0a0f1ae86ee0d3146e6c25176f3f5dadf2e26edb1f8> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-YFUQXPPPxoq9L_Cg8a6G7g0xRubCUXbz9drfLibtsfg.pb
    Feb 25, 2022 6:48:48 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 25, 2022 6:48:49 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 25, 2022 6:48:49 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 25, 2022 6:48:49 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 25, 2022 6:48:49 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-25_10_48_49-7257701559366666559?project=apache-beam-testing
    Feb 25, 2022 6:48:49 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-25_10_48_49-7257701559366666559
    Feb 25, 2022 6:48:49 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-25_10_48_49-7257701559366666559
    Feb 25, 2022 6:48:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-25T18:48:51.178Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 25, 2022 6:49:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T18:49:01.534Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 25, 2022 6:49:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T18:49:02.549Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 25, 2022 6:49:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T18:49:02.666Z: Expanding GroupByKey operations into optimizable parts.
    Feb 25, 2022 6:49:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T18:49:02.708Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 25, 2022 6:49:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T18:49:02.808Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 25, 2022 6:49:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T18:49:02.862Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 25, 2022 6:49:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T18:49:02.897Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 25, 2022 6:49:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T18:49:03.639Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 25, 2022 6:49:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T18:49:03.753Z: Starting 5 workers in us-central1-f...
    Feb 25, 2022 6:49:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T18:49:10.384Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 25, 2022 6:49:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T18:49:51.614Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 25, 2022 6:50:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T18:50:15.281Z: Workers have started successfully.
    Feb 25, 2022 6:50:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T18:50:15.309Z: Workers have started successfully.
    Feb 25, 2022 6:50:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T18:50:50.467Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 25, 2022 6:50:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T18:50:50.872Z: Cleaning up.
    Feb 25, 2022 6:50:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T18:50:51.029Z: Stopping worker pool...
    Feb 25, 2022 6:53:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T18:53:13.296Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 25, 2022 6:53:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T18:53:13.371Z: Worker pool stopped.
    Feb 25, 2022 6:53:20 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-25_10_48_49-7257701559366666559 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 3e5890c0-e32e-4bee-bba5-c515671c01e0 and timestamp: 2022-02-25T18:53:20.566000000Z:
                     Metric:                    Value:
                   read_time                     8.378
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 25, 2022 6:53:20 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 6 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 5 mins 3.555 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 8m 57s
165 actionable tasks: 111 executed, 52 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/4ljezanx2lp6y

Stopped 5 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3069

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3069/display/redirect?page=changes>

Changes:

[noreply] Merge pull request #16846 from [BEAM-12164]: Add sad path tests for


------------------------------------------
[...truncated 354.14 KB...]
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 25, 2022 12:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 25, 2022 12:45:23 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 25, 2022 12:45:23 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 368 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 25, 2022 12:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 25, 2022 12:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 25, 2022 12:45:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 25, 2022 12:45:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 25, 2022 12:45:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 25, 2022 12:45:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 25, 2022 12:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@329929969]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
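
This is the coder failure the message spells out: the monitored output is a PCollection<Row> with no schema attached, so a RowCoder cannot be inferred. The remedy the exception names looks roughly like the sketch below; the field names mirror the query's projection, and the types are assumptions rather than anything read from the test:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class SchemaFixSketch {
        static PCollection<Row> withRowSchema(PCollection<Row> rows) {
            Schema schema = Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")   // assumed: BigQuery INTEGER maps to int64
                .build();
            // Attaching the schema lets Beam infer a RowCoder, per
            // "Please provide a schema instead using PCollection.setRowSchema".
            return rows.setRowSchema(schema);
        }
    }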

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 25, 2022 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 25, 2022 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 25, 2022 12:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 25, 2022 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 25, 2022 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 25, 2022 12:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 25, 2022 12:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1030952115]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 25, 2022 12:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 25, 2022 12:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 25, 2022 12:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 25, 2022 12:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 25, 2022 12:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 25, 2022 12:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 25, 2022 12:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 25, 2022 12:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
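
These two lines are the push-down itself: the planner replaced the Calc-over-IO pair with a single BeamPushDownIOSourceRel carrying the used fields, and the table provider hands the predicate to the storage read. Expressed directly against BigQueryIO, the same read would look roughly like this (the table reference and the readTableRows choice are illustrative assumptions):

    import com.google.api.services.bigquery.model.TableRow;
    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    public class DirectReadSketch {
        static TypedRead<TableRow> read() {
            return BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full")    // hypothetical table
                .withMethod(TypedRead.Method.DIRECT_READ)         // Storage Read API
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
        }
    }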
    Feb 25, 2022 12:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 25, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 369 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 25, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-c8Lvs0liGychB9qFJmWQJSIpMpJsPlHym_I_tJH_wBU.jar
    Feb 25, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2027224909511741695.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-cCUV21phXw1YZgIT-GdAkKhHSHuSHNFfIKYnhFB4yw4.jar
    Feb 25, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 368 files cached, 1 files newly uploaded in 0 seconds
    Feb 25, 2022 12:45:36 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 25, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <144626 bytes, hash 1b54587d6e727d64b3fc982462fde6d9d5b72f9e9866380ad45e5a4026a83b30> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-G1RYfW5yfWSz_JgkYv3m2dW3L56YZjgK1F5aQCaoOzA.pb
    Feb 25, 2022 12:45:39 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 25, 2022 12:45:39 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 25, 2022 12:45:39 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 25, 2022 12:45:39 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 25, 2022 12:45:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-25_04_45_39-2120312521875112066?project=apache-beam-testing
    Feb 25, 2022 12:45:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-25_04_45_39-2120312521875112066
    Feb 25, 2022 12:45:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-25_04_45_39-2120312521875112066
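
Alongside the gcloud one-liner printed above, the submitting program can cancel its own job through the PipelineResult handle returned by run(); a minimal sketch:

    import java.io.IOException;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.PipelineResult;

    public class CancelSketch {
        static void runThenCancel(Pipeline pipeline) {
            PipelineResult result = pipeline.run();
            try {
                result.cancel(); // same effect as the gcloud command above
            } catch (IOException e) {
                // cancel() throws IOException if the runner cannot reach the service
            }
        }
    }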
    Feb 25, 2022 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-25T12:45:41.383Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 25, 2022 12:46:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T12:46:00.143Z: Worker configuration: e2-standard-2 in us-central1-a.
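
The e2-standard-2 shape above is what the job got by default (the test command pins only the worker counts); it can also be set explicitly through the worker-pool options, as in this sketch, which is not something this job does:

    import org.apache.beam.runners.dataflow.options.DataflowPipelineWorkerPoolOptions;
    import org.apache.beam.sdk.options.PipelineOptions;

    public class WorkerPoolSketch {
        static void pinWorkers(PipelineOptions base) {
            DataflowPipelineWorkerPoolOptions pool =
                base.as(DataflowPipelineWorkerPoolOptions.class);
            pool.setWorkerMachineType("e2-standard-2");
            pool.setNumWorkers(5);
        }
    }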
    Feb 25, 2022 12:46:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T12:46:02.377Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 25, 2022 12:46:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T12:46:02.445Z: Expanding GroupByKey operations into optimizable parts.
    Feb 25, 2022 12:46:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T12:46:02.492Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 25, 2022 12:46:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T12:46:02.609Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 25, 2022 12:46:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T12:46:02.658Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 25, 2022 12:46:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T12:46:02.690Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 25, 2022 12:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T12:46:03.113Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 25, 2022 12:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T12:46:03.209Z: Starting 5 workers in us-central1-a...
    Feb 25, 2022 12:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T12:46:21.491Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
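
On the 100-descriptor warning just above: the Monitoring API it points at can list, and if need be delete, the stale custom descriptors. A hedged sketch using the google-cloud-monitoring client (the dependency is an assumption; the destructive call is left commented out):

    import com.google.api.MetricDescriptor;
    import com.google.cloud.monitoring.v3.MetricServiceClient;
    import com.google.monitoring.v3.ProjectName;

    public class DescriptorAudit {
        public static void main(String[] args) throws Exception {
            try (MetricServiceClient client = MetricServiceClient.create()) {
                for (MetricDescriptor d :
                        client.listMetricDescriptors(ProjectName.of("apache-beam-testing"))
                              .iterateAll()) {
                    if (d.getType().startsWith("custom.googleapis.com/")) {
                        System.out.println("candidate: " + d.getType());
                        // client.deleteMetricDescriptor(...) would remove it; destructive,
                        // so run deliberately rather than as part of an audit pass
                    }
                }
            }
        }
    }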
    Feb 25, 2022 12:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T12:46:50.976Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 25, 2022 12:47:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T12:47:16.375Z: Workers have started successfully.
    Feb 25, 2022 12:47:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T12:47:16.407Z: Workers have started successfully.
    Feb 25, 2022 12:47:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T12:47:46.382Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 25, 2022 12:47:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T12:47:46.574Z: Cleaning up.
    Feb 25, 2022 12:47:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T12:47:46.656Z: Stopping worker pool...
    Feb 25, 2022 12:50:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T12:50:15.052Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 25, 2022 12:50:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T12:50:15.126Z: Worker pool stopped.
    Feb 25, 2022 12:50:22 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-25_04_45_39-2120312521875112066 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 944ebbd3-11f1-4bd4-8dec-70422db352b7 and timestamp: 2022-02-25T12:50:22.829000000Z:
                     Metric:                    Value:
                   read_time                     7.967
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 25, 2022 12:50:22 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
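
On the warning above: InfluxDBPublisher skips publishing whenever its measurement/database settings are absent. In Beam load-test setups these are typically passed as extra pipeline options, along the lines of the following; the option names and values here are assumptions, not read from this job:

    --influxMeasurement=sql_bqio_read_java_batch
    --influxDatabase=beam_test_metrics
    --influxHost=http://localhost:8086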

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.035 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 5 mins 5.499 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 3s
165 actionable tasks: 107 executed, 56 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/utelqamtti3xc

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3068

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3068/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13925] Add most of the supporting files for the pr management


------------------------------------------
[...truncated 347.70 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ccea43b839a4c19410bdd8b0371a8c57
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 25, 2022 6:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 25, 2022 6:44:52 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 25, 2022 6:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 25, 2022 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 25, 2022 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 25, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 25, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 25, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 25, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 25, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@356279811]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 25, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 25, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 25, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 25, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 25, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 25, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 25, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@43951584]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 25, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 25, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 25, 2022 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 25, 2022 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 25, 2022 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 25, 2022 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 25, 2022 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 25, 2022 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 25, 2022 6:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 25, 2022 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 25, 2022 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-c8Lvs0liGychB9qFJmWQJSIpMpJsPlHym_I_tJH_wBU.jar
    Feb 25, 2022 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6079655215446605836.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-hMZ3kBb0XqwqgGrLcoC5iwjVbD4NSBSENWo2gBqr46Y.jar
    Feb 25, 2022 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 0 seconds
    Feb 25, 2022 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 25, 2022 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143944 bytes, hash 4e73a8ba29dc541da756163696df5ab22689d117c16bcd4bafda0195a6d8982d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-TnOouincVB2nVhY2lt9asiaJ0RfBa81Lr9oBlabYmC0.pb
    Feb 25, 2022 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 25, 2022 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 25, 2022 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 25, 2022 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 25, 2022 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-24_22_45_08-3710875521290698175?project=apache-beam-testing
    Feb 25, 2022 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-24_22_45_08-3710875521290698175
    Feb 25, 2022 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-24_22_45_08-3710875521290698175
    Feb 25, 2022 6:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-25T06:45:10.751Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 25, 2022 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T06:45:21.663Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 25, 2022 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T06:45:22.318Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 25, 2022 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T06:45:22.364Z: Expanding GroupByKey operations into optimizable parts.
    Feb 25, 2022 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T06:45:22.399Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 25, 2022 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T06:45:22.502Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 25, 2022 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T06:45:22.532Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 25, 2022 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T06:45:22.553Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 25, 2022 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T06:45:22.888Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 25, 2022 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T06:45:22.975Z: Starting 5 workers in us-central1-f...
    Feb 25, 2022 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T06:45:30.535Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 25, 2022 6:46:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T06:46:10.006Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 25, 2022 6:46:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T06:46:37.520Z: Workers have started successfully.
    Feb 25, 2022 6:46:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T06:46:37.550Z: Workers have started successfully.
    Feb 25, 2022 6:47:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T06:47:14.161Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 25, 2022 6:47:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T06:47:14.329Z: Cleaning up.
    Feb 25, 2022 6:47:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T06:47:14.416Z: Stopping worker pool...
    Feb 25, 2022 6:49:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T06:49:40.694Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 25, 2022 6:49:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T06:49:40.787Z: Worker pool stopped.
    Feb 25, 2022 6:49:47 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-24_22_45_08-3710875521290698175 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): eb9f4369-4ca6-4d46-9c01-4a4bd8297205 and timestamp: 2022-02-25T06:49:47.316000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.668

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 25, 2022 6:49:47 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 59.535 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 27s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/p2jdesfogi75m

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3067

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3067/display/redirect?page=changes>

Changes:

[benjamin.gonzalez] Revert PR#16253 due errors with plugin flaky-test-handler

[noreply] [BEAM-13767] Migrate serveral portable runner tasks to use configuration

[noreply] [BEAM-13996] Removing 'No cluster_manager is associated with the

[noreply] [BEAM-13906] Improve coverage of errors package (#16934)

[noreply] [BEAM-13886] unit tests for trigger package (#16935)

[noreply] [BEAM-4767] Remove beam- prefix from release script tags (#16899)

[noreply] [BEAM-13866] Add small unit tests to errorx, make boolean assignment


------------------------------------------
[...truncated 352.80 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ccea43b839a4c19410bdd8b0371a8c57
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 25, 2022 12:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 25, 2022 12:45:15 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 25, 2022 12:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 25, 2022 12:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 25, 2022 12:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 25, 2022 12:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 25, 2022 12:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 25, 2022 12:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 25, 2022 12:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 25, 2022 12:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@356279811]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 25, 2022 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 25, 2022 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 25, 2022 12:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 25, 2022 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 25, 2022 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 25, 2022 12:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 25, 2022 12:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@43951584]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 25, 2022 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 25, 2022 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 25, 2022 12:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 25, 2022 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 25, 2022 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 25, 2022 12:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 25, 2022 12:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 25, 2022 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 25, 2022 12:45:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 25, 2022 12:45:26 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 25, 2022 12:45:26 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-c8Lvs0liGychB9qFJmWQJSIpMpJsPlHym_I_tJH_wBU.jar
    Feb 25, 2022 12:45:26 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test9151259602243825879.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-gN1hhL1GhCa6EAObRwj_BXAud-277zoR7xPwizBdRwo.jar
    Feb 25, 2022 12:45:28 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 1 seconds
    Feb 25, 2022 12:45:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 25, 2022 12:45:28 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash 692187098f3a7b61b98cffac6ff912f46fc754074a7fa90a3655860450fd7186> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-aSGHCY86e2G5jP-sb_kS9G_HVAdKf6kKNlWGBFD9cYY.pb
    Feb 25, 2022 12:45:31 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 25, 2022 12:45:31 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 25, 2022 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 25, 2022 12:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 25, 2022 12:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-24_16_45_32-10135773099978582162?project=apache-beam-testing
    Feb 25, 2022 12:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-24_16_45_32-10135773099978582162
    Feb 25, 2022 12:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-24_16_45_32-10135773099978582162
    Feb 25, 2022 12:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-25T00:45:34.280Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 25, 2022 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T00:45:45.022Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 25, 2022 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T00:45:45.927Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 25, 2022 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T00:45:45.971Z: Expanding GroupByKey operations into optimizable parts.
    Feb 25, 2022 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T00:45:46.016Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 25, 2022 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T00:45:46.116Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 25, 2022 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T00:45:46.149Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 25, 2022 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T00:45:46.179Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 25, 2022 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T00:45:46.735Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 25, 2022 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T00:45:46.853Z: Starting 5 workers in us-central1-f...
    Feb 25, 2022 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T00:45:48.169Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 25, 2022 12:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T00:46:28.333Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 25, 2022 12:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T00:46:54.925Z: Workers have started successfully.
    Feb 25, 2022 12:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T00:46:54.959Z: Workers have started successfully.
    Feb 25, 2022 12:47:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T00:47:30.299Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 25, 2022 12:47:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T00:47:30.452Z: Cleaning up.
    Feb 25, 2022 12:47:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T00:47:30.527Z: Stopping worker pool...
    Feb 25, 2022 12:50:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T00:50:02.272Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 25, 2022 12:50:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-25T00:50:02.332Z: Worker pool stopped.
    Feb 25, 2022 12:50:08 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-24_16_45_32-10135773099978582162 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 21b6cff2-e41c-43df-98af-f8f4dfac2059 and timestamp: 2022-02-25T00:50:08.802000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                      9.73

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 25, 2022 12:50:08 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 57.41 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 48s
165 actionable tasks: 107 executed, 56 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/nhwkgoyuoliwk

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3066

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3066/display/redirect?page=changes>

Changes:

[noreply] Fix BoundedQueueExecutor and StreamingDataflowWorker to actually limit

[noreply] [BEAM-1857] Add Neo4jIO (#15916)


------------------------------------------
[...truncated 352.17 KB...]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 24, 2022 6:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 24, 2022 6:44:57 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 24, 2022 6:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 24, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 24, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 24, 2022 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 24, 2022 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 24, 2022 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 24, 2022 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 24, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@356279811]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
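
The remediation the exception itself names is to attach a schema so a RowCoder
can be inferred. A minimal sketch of that fix in user pipeline code, assuming
a PCollection<Row> produced by a hypothetical RowMonitorFn; the input variable
and the field names below are illustrative, not the IT's actual schema:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Build a schema matching the rows the DoFn emits.
    Schema schema =
        Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

    PCollection<Row> rows =
        input.apply(ParDo.of(new RowMonitorFn())); // hypothetical DoFn
    // Attaching the schema lets Beam infer a RowCoder, avoiding
    // "Cannot provide a coder for a Beam Row".
    rows.setRowSchema(schema);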

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 24, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 24, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 24, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 24, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 24, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 24, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 24, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@43951584]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 24, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 24, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 24, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 24, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 24, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 24, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 24, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 24, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 24, 2022 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 24, 2022 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 24, 2022 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-c8Lvs0liGychB9qFJmWQJSIpMpJsPlHym_I_tJH_wBU.jar
    Feb 24, 2022 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.38.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.38.0-SNAPSHOT-tests-WW_qh_qZ0OtxNK1HIZTckB7QnP0NV0saj8TCBpbuXbM.jar
    Feb 24, 2022 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.38.0-SNAPSHOT-vPEGqxI9MKGLtuCuQz0HzI-33fdG7YkDTKUwhwE408c.jar
    Feb 24, 2022 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3520092636260975303.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-B-mPgkpYvp9QP09jxZgNIEBZ5HdhuOGGpkVVbux-OGs.jar
    Feb 24, 2022 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.38.0-SNAPSHOT-Tl-rqROyfFjUbm1pi2LQ2tdsZFwawXrRlgBoBaQ0-f4.jar
    Feb 24, 2022 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.38.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.38.0-SNAPSHOT-tests-9waRkOC4eM0X7ABYrPq1tRErTw5OgxB_aYDH-VkPcDE.jar
    Feb 24, 2022 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/join-library/build/libs/beam-sdks-java-extensions-join-library-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-join-library-2.38.0-SNAPSHOT-xz-aSlhizJi8jigGo3amZ2g_MkglE3dBulicsi2UX5U.jar
    Feb 24, 2022 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/parquet/build/libs/beam-sdks-java-io-parquet-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-parquet-2.38.0-SNAPSHOT--Y3Ki6BsSIhDqZLGsTPV4L2LNYlAKfQgFkOYYPov-fs.jar
    Feb 24, 2022 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/hadoop-common/build/libs/beam-sdks-java-io-hadoop-common-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-hadoop-common-2.38.0-SNAPSHOT--Z7FaTJuLSyiPWeUL9VxAgqy53Gn2R4elBkiZEpHHyU.jar
    Feb 24, 2022 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/udf/build/libs/beam-sdks-java-extensions-sql-udf-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-udf-2.38.0-SNAPSHOT-MlQQAS1Je1cH4lXtIOKM3zL6o97S5F7sJQ3zUailw08.jar
    Feb 24, 2022 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 358 files cached, 9 files newly uploaded in 1 seconds
    Feb 24, 2022 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 24, 2022 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash 87f8269bb0cb21668c100b803e29348793a40a543ba451c0f8037f5dab25080e> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-h_gmm7DLIWaMEAuAPik0h5OkClQ7pFHA-AN_XaslCA4.pb
    Feb 24, 2022 6:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 24, 2022 6:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 24, 2022 6:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 24, 2022 6:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 24, 2022 6:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-24_10_45_13-17251798086531171860?project=apache-beam-testing
    Feb 24, 2022 6:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-24_10_45_13-17251798086531171860
    Feb 24, 2022 6:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-24_10_45_13-17251798086531171860
    Feb 24, 2022 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-24T18:45:17.436Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 24, 2022 6:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T18:45:35.263Z: Worker configuration: e2-standard-2 in us-central1-c.
    Feb 24, 2022 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T18:45:35.966Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 24, 2022 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T18:45:36.005Z: Expanding GroupByKey operations into optimizable parts.
    Feb 24, 2022 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T18:45:36.036Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 24, 2022 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T18:45:36.091Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 24, 2022 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T18:45:36.122Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 24, 2022 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T18:45:36.145Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 24, 2022 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T18:45:36.484Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 24, 2022 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T18:45:36.547Z: Starting 5 workers in us-central1-c...
    Feb 24, 2022 6:46:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T18:46:00.610Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 24, 2022 6:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T18:46:27.448Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 24, 2022 6:46:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T18:46:53.730Z: Workers have started successfully.
    Feb 24, 2022 6:46:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T18:46:53.761Z: Workers have started successfully.
    Feb 24, 2022 6:47:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T18:47:28.171Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 24, 2022 6:47:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T18:47:28.314Z: Cleaning up.
    Feb 24, 2022 6:47:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T18:47:28.393Z: Stopping worker pool...
    Feb 24, 2022 6:49:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T18:49:52.357Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 24, 2022 6:49:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T18:49:52.393Z: Worker pool stopped.
    Feb 24, 2022 6:49:58 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-24_10_45_13-17251798086531171860 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): a801da2f-6cf8-46ba-bb63-ba7c0a90a9eb and timestamp: 2022-02-24T18:49:58.458000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.247

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 24, 2022 6:49:58 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 67 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.005 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.002 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 5 mins 4.771 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 38s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/ovhorcloqzerc

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3065

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3065/display/redirect>

Changes:


------------------------------------------
Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on apache-beam-jenkins-4 (beam) in workspace <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/>
The recommended git tool is: NONE
No credentials specified
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository https://github.com/apache/beam.git
 > git init <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src> # timeout=10
Fetching upstream changes from https://github.com/apache/beam.git
 > git --version # timeout=10
 > git --version # 'git version 2.25.1'
 > git fetch --tags --force --progress -- https://github.com/apache/beam.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/apache/beam.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/apache/beam.git # timeout=10
Fetching upstream changes from https://github.com/apache/beam.git
 > git fetch --tags --force --progress -- https://github.com/apache/beam.git +refs/heads/*:refs/remotes/origin/* +refs/pull/${ghprbPullId}/*:refs/remotes/origin/pr/${ghprbPullId}/* # timeout=10
 > git rev-parse origin/master^{commit} # timeout=10
Checking out Revision 2a45a5b6c34141dcc595c5cda3d28264e542bdff (origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 2a45a5b6c34141dcc595c5cda3d28264e542bdff # timeout=10
Commit message: "Merge pull request #16826 from [BEAM-13870] [Playground] Increase test coverage for the cache package"
 > git rev-list --no-walk 2a45a5b6c34141dcc595c5cda3d28264e542bdff # timeout=10
No emails were triggered.
[EnvInject] - Executing scripts and injecting environment variables after the SCM step.
[EnvInject] - Injecting as environment variables the properties content 
SPARK_LOCAL_IP=127.0.0.1
SETUPTOOLS_USE_DISTUTILS=stdlib

[EnvInject] - Variables injected successfully.
[Gradle] - Launching build.
[src] $ <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/gradlew> --continue --max-workers=12 -Dorg.gradle.jvmargs=-Xms2g -Dorg.gradle.jvmargs=-Xmx4g -Dorg.gradle.vfs.watch=false -Pdocker-pull-licenses --info -DintegrationTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE"] -DintegrationTestRunner=dataflow :sdks:java:extensions:sql:perf-tests:integrationTest --tests org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT
Initialized native services in: /home/jenkins/.gradle/native
Initialized jansi services in: /home/jenkins/.gradle/native
Removing 0 daemon stop events from registry

FAILURE: Build failed with an exception.

* What went wrong:
Timeout waiting to lock daemon addresses registry. It is currently in use by another Gradle instance.
Owner PID: unknown
Our PID: 2056376
Owner Operation: unknown
Our operation: 
Lock file: /home/jenkins/.gradle/daemon/7.3.2/registry.bin.lock

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org
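
When an executor hits this daemon-registry lock contention, the usual first
step is to clear stale daemons before retrying (standard Gradle commands, not
specific to this job):
> ./gradlew --stop
Or bypass the daemon for a one-off retry:
> ./gradlew --no-daemon :sdks:java:extensions:sql:perf-tests:integrationTest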
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3064

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3064/display/redirect?page=changes>

Changes:

[noreply] Merge pull request #16857 from [BEAM-13662] [Playground] Support

[noreply] Merge pull request #16826 from [BEAM-13870] [Playground] Increase test


------------------------------------------
[...truncated 345.78 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 971'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 971'
Successfully started process 'Gradle Test Executor 971'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 24, 2022 6:44:38 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 24, 2022 6:44:39 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 24, 2022 6:44:39 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 24, 2022 6:44:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 24, 2022 6:44:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 24, 2022 6:44:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 24, 2022 6:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 24, 2022 6:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 24, 2022 6:44:44 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 24, 2022 6:44:44 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@356279811]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 24, 2022 6:44:44 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 24, 2022 6:44:44 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 24, 2022 6:44:45 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 24, 2022 6:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 24, 2022 6:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 24, 2022 6:44:45 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 24, 2022 6:44:45 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@474324008]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 24, 2022 6:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 24, 2022 6:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 24, 2022 6:44:45 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 24, 2022 6:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 24, 2022 6:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 24, 2022 6:44:45 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 24, 2022 6:44:45 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 24, 2022 6:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 24, 2022 6:44:46 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 24, 2022 6:44:50 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 24, 2022 6:44:50 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 24, 2022 6:44:50 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6704121557724585694.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-PMgX5RFSPRBhcMtoCcPFKBJ_VWiz24gPnW8EyKMCyPI.jar
    Feb 24, 2022 6:44:51 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 0 seconds
    Feb 24, 2022 6:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 24, 2022 6:44:51 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash d11f257966fc31781039b712f842dc284c1d9f16ba221aafb7d15c830b904135> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-0R8leWb8MXgQObcS-ELcKEwdnxa6Ihqvt9FcgwuQQTU.pb
    Feb 24, 2022 6:44:54 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 24, 2022 6:44:54 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 24, 2022 6:44:54 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 24, 2022 6:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 24, 2022 6:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-23_22_44_54-9402402576319792309?project=apache-beam-testing
    Feb 24, 2022 6:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-23_22_44_54-9402402576319792309
    Feb 24, 2022 6:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-23_22_44_54-9402402576319792309
    Feb 24, 2022 6:44:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-24T06:44:57.020Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 24, 2022 6:45:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T06:45:10.399Z: Worker configuration: e2-standard-2 in us-central1-b.
    Feb 24, 2022 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T06:45:11.193Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 24, 2022 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T06:45:11.225Z: Expanding GroupByKey operations into optimizable parts.
    Feb 24, 2022 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T06:45:11.250Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 24, 2022 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T06:45:11.323Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 24, 2022 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T06:45:11.347Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 24, 2022 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T06:45:11.378Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 24, 2022 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T06:45:11.762Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 24, 2022 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T06:45:11.830Z: Starting 5 workers in us-central1-b...
    Feb 24, 2022 6:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T06:45:36.704Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 24, 2022 6:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T06:45:53.107Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Feb 24, 2022 6:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T06:45:53.140Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Feb 24, 2022 6:46:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T06:46:03.441Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 24, 2022 6:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T06:46:28.059Z: Workers have started successfully.
    Feb 24, 2022 6:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T06:46:28.096Z: Workers have started successfully.
    Feb 24, 2022 6:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T06:47:01.540Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 24, 2022 6:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T06:47:01.717Z: Cleaning up.
    Feb 24, 2022 6:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T06:47:01.795Z: Stopping worker pool...
    Feb 24, 2022 6:49:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T06:49:28.992Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 24, 2022 6:49:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T06:49:29.046Z: Worker pool stopped.
    Feb 24, 2022 6:49:34 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-23_22_44_54-9402402576319792309 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 1e9f1607-a686-4c78-a3ad-0561a57a6ecc and timestamp: 2022-02-24T06:49:35.007000000Z:
                     Metric:                    Value:
                   read_time                     9.554
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 24, 2022 6:49:35 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 971 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.006 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.004 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 5 mins 0.185 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 16s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/qdyrsx6jhkj52

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3063

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3063/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13884] Improve mtime package (#16924)

[noreply] Minor: Update Go API doc links (#16932)

[noreply] [BEAM-13218] Re-enable


------------------------------------------
[...truncated 349.18 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is e27bf5f90ec76a6d3eade9f59506d8a8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 35'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 35'
Successfully started process 'Gradle Test Executor 35'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 24, 2022 12:47:08 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 24, 2022 12:47:08 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 24, 2022 12:47:09 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 24, 2022 12:47:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 24, 2022 12:47:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 24, 2022 12:47:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 24, 2022 12:47:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 24, 2022 12:47:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 24, 2022 12:47:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 24, 2022 12:47:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@356279811]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
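
The IllegalStateException above is the root failure behind both failing reads: the PCollection of Beam Rows produced under BeamIOSourceRel_4 carries no schema, so no Row coder can be inferred when the next transform is applied. A minimal sketch of the fix the error message itself suggests is to attach a schema (or an explicit RowCoder) immediately after the Row-producing step; the class, schema, and field names below are illustrative stand-ins, not the real 14-field HACKER_NEWS schema:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFix {
      // Illustrative schema covering only the four projected columns.
      static final Schema SCHEMA =
          Schema.builder()
              .addNullableField("author", Schema.FieldType.STRING)
              .addNullableField("type", Schema.FieldType.STRING)
              .addNullableField("title", Schema.FieldType.STRING)
              .addNullableField("score", Schema.FieldType.INT64)
              .build();

      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        PCollection<Row> rows =
            p.apply(Create.of("story", "job"))
                .apply(ParDo.of(new DoFn<String, Row>() {
                  @ProcessElement
                  public void process(@Element String type, OutputReceiver<Row> out) {
                    out.output(Row.withSchema(SCHEMA)
                        .addValues("someone", type, "a title", 3L).build());
                  }
                }));

        // Without the next line, consuming `rows` fails exactly as in the log:
        // a DoFn that outputs Row gives the coder registry nothing to infer from.
        rows.setRowSchema(SCHEMA);
        // Equivalent, as the message also notes:
        // rows.setCoder(org.apache.beam.sdk.coders.RowCoder.of(SCHEMA));

        p.run().waitUntilFinish();
      }
    }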

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 24, 2022 12:47:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 24, 2022 12:47:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 24, 2022 12:47:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 24, 2022 12:47:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 24, 2022 12:47:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 24, 2022 12:47:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 24, 2022 12:47:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@474324008]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 24, 2022 12:47:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 24, 2022 12:47:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 24, 2022 12:47:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 24, 2022 12:47:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 24, 2022 12:47:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 24, 2022 12:47:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 24, 2022 12:47:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 24, 2022 12:47:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
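
The two log entries above are the heart of the passing push-down test: the BeamPushDownIOSourceRel keeps only the four used fields and hands the WHERE clause to the BigQuery storage read, so no BeamCalcRel remains in the plan. As a rough illustration of the same query shape through Beam SQL, a sketch run against a small in-memory PCollection (visible to the query under the default name PCOLLECTION) rather than the BigQuery table provider the test uses; schema and rows are invented for the example:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushDownShape {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Tiny stand-in for HACKER_NEWS: only the columns the query touches.
        Schema schema =
            Schema.builder()
                .addNullableField("by", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();

        PCollection<Row> hackerNews =
            p.apply(
                Create.of(
                        Row.withSchema(schema).addValues("a", "story", "t1", 3L).build(),
                        Row.withSchema(schema).addValues("b", "comment", "t2", 9L).build())
                    .withRowSchema(schema));

        // Against the BigQuery table provider, this same WHERE clause is what
        // the log shows being pushed into the storage read.
        PCollection<Row> result =
            hackerNews.apply(
                SqlTransform.query(
                    "SELECT `by` AS author, `type`, `title`, `score` "
                        + "FROM PCOLLECTION "
                        + "WHERE (`type` = 'story' OR `type` = 'job') AND `score` > 2"));

        p.run().waitUntilFinish();
      }
    }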
    Feb 24, 2022 12:47:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 24, 2022 12:47:20 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 24, 2022 12:47:20 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 24, 2022 12:47:20 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4221323459307517364.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-mIDEmsB-unghKgUqWK_AYBbDCYeROkt_vYLhAGQG0vg.jar
    Feb 24, 2022 12:47:21 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 0 seconds
    Feb 24, 2022 12:47:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 24, 2022 12:47:21 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash 9829cce8a3b681855a8779662b29b6e715fbf7e15c2aa3cf1cbafcce98222e92> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-mCnM6KO2gYVah3lmKym25xX79-FcKqPPHLr8zpgiLpI.pb
    Feb 24, 2022 12:47:24 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 24, 2022 12:47:24 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 24, 2022 12:47:24 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 24, 2022 12:47:24 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 24, 2022 12:47:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-23_16_47_24-6509610708928032731?project=apache-beam-testing
    Feb 24, 2022 12:47:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-23_16_47_24-6509610708928032731
    Feb 24, 2022 12:47:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-23_16_47_24-6509610708928032731
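
The gcloud command is what the DataflowRunner prints for manual cancellation; the same can be done from Java through the PipelineResult handle returned by Pipeline.run(). A minimal sketch, assuming a fully constructed Pipeline p (cancel() is best-effort, and some runners do not support it):

    import java.io.IOException;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.PipelineResult;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class CancelRunningJob {
      public static void main(String[] args) throws IOException {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        // ... pipeline construction elided ...

        PipelineResult result = p.run();

        // On Dataflow this triggers the same cancellation as the gcloud
        // command above; other runners may throw UnsupportedOperationException.
        if (result.getState() == PipelineResult.State.RUNNING) {
          result.cancel();
        }
      }
    }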
    Feb 24, 2022 12:47:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-24T00:47:26.644Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 24, 2022 12:47:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T00:47:36.012Z: Worker configuration: e2-standard-2 in us-central1-b.
    Feb 24, 2022 12:47:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T00:47:36.800Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 24, 2022 12:47:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T00:47:36.843Z: Expanding GroupByKey operations into optimizable parts.
    Feb 24, 2022 12:47:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T00:47:36.881Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 24, 2022 12:47:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T00:47:36.951Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 24, 2022 12:47:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T00:47:36.982Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 24, 2022 12:47:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T00:47:37.006Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 24, 2022 12:47:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T00:47:37.354Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 24, 2022 12:47:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T00:47:37.421Z: Starting 5 workers in us-central1-b...
    Feb 24, 2022 12:47:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T00:47:52.570Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 24, 2022 12:48:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T00:48:23.600Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 24, 2022 12:49:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T00:49:02.433Z: Workers have started successfully.
    Feb 24, 2022 12:49:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T00:49:02.471Z: Workers have started successfully.
    Feb 24, 2022 12:49:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T00:49:43.379Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 24, 2022 12:49:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T00:49:43.564Z: Cleaning up.
    Feb 24, 2022 12:49:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T00:49:43.648Z: Stopping worker pool...
    Feb 24, 2022 12:52:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T00:52:10.525Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 24, 2022 12:52:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-24T00:52:10.573Z: Worker pool stopped.
    Feb 24, 2022 12:52:16 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-23_16_47_24-6509610708928032731 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): ea39af44-a38b-4777-9748-e4aca2cc62a2 and timestamp: 2022-02-24T00:52:16.648000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    18.964

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 24, 2022 12:52:16 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 35 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.001 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.002 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 5 mins 12.569 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 53s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/tnckueds7knry

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3062

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3062/display/redirect?page=changes>

Changes:

[noreply] [BEAM-12645] Fix code-cov flakes due to monorepo. (#16925)

[noreply] [BEAM-13969] Deprecate stringx package (#16884)

[noreply] Add Go badge to ReadMe (#16897)

[noreply] [BEAM-13980] Re-add method gone missing in af2f8ee6 (#16918)


------------------------------------------
[...truncated 350.79 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is e27bf5f90ec76a6d3eade9f59506d8a8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 23, 2022 6:55:54 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 23, 2022 6:55:55 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 23, 2022 6:55:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 23, 2022 6:55:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 23, 2022 6:55:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 23, 2022 6:55:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 23, 2022 6:55:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 23, 2022 6:55:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 23, 2022 6:55:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 23, 2022 6:56:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@356279811]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 23, 2022 6:56:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 23, 2022 6:56:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 23, 2022 6:56:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 23, 2022 6:56:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 23, 2022 6:56:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 23, 2022 6:56:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 23, 2022 6:56:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@474324008]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 23, 2022 6:56:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 23, 2022 6:56:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 23, 2022 6:56:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 23, 2022 6:56:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 23, 2022 6:56:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 23, 2022 6:56:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 23, 2022 6:56:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 23, 2022 6:56:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 23, 2022 6:56:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 23, 2022 6:56:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 23, 2022 6:56:11 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 23, 2022 6:56:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6054765032214276801.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-K8Rt70KjRXrYgfaJpUP0KexhpBZ_lzLoX1RfYZrtefs.jar
    Feb 23, 2022 6:56:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 0 seconds
    Feb 23, 2022 6:56:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 23, 2022 6:56:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash 3e45803933c896f0f0878081b780eaa46c83cd66cbc21154ca3c3bd887d48492> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-PkWAOTPIlvDwh4CBt4DqpGyDzWbLwhFUyjw72IfUhJI.pb
    Feb 23, 2022 6:56:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 23, 2022 6:56:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 23, 2022 6:56:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 23, 2022 6:56:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 23, 2022 6:56:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-23_10_56_15-11617412799719060417?project=apache-beam-testing
    Feb 23, 2022 6:56:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-23_10_56_15-11617412799719060417
    Feb 23, 2022 6:56:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-23_10_56_15-11617412799719060417
    Feb 23, 2022 6:56:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-23T18:56:17.672Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 23, 2022 6:56:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T18:56:30.991Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 23, 2022 6:56:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T18:56:31.877Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 23, 2022 6:56:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T18:56:31.912Z: Expanding GroupByKey operations into optimizable parts.
    Feb 23, 2022 6:56:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T18:56:31.951Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 23, 2022 6:56:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T18:56:32.023Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 23, 2022 6:56:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T18:56:32.051Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 23, 2022 6:56:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T18:56:32.083Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 23, 2022 6:56:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T18:56:32.437Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 23, 2022 6:56:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T18:56:32.523Z: Starting 5 workers in us-central1-a...
    Feb 23, 2022 6:57:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T18:57:03.297Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 23, 2022 6:57:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T18:57:15.528Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 23, 2022 6:57:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T18:57:43.092Z: Workers have started successfully.
    Feb 23, 2022 6:57:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T18:57:43.116Z: Workers have started successfully.
    Feb 23, 2022 6:58:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T18:58:13.922Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 23, 2022 6:58:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T18:58:14.187Z: Cleaning up.
    Feb 23, 2022 6:58:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T18:58:14.272Z: Stopping worker pool...
    Feb 23, 2022 7:00:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T19:00:41.536Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 23, 2022 7:00:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T19:00:41.592Z: Worker pool stopped.
    Feb 23, 2022 7:00:47 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-23_10_56_15-11617412799719060417 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c5aebdc5-09bf-4d96-8a47-820a5d01da92 and timestamp: 2022-02-23T19:00:47.338000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.797

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 23, 2022 7:00:47 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 56.824 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 52s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/j55f24e7xtl7w

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3061

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3061/display/redirect>

Changes:


------------------------------------------
[...truncated 346.05 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is e27bf5f90ec76a6d3eade9f59506d8a8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 23, 2022 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 23, 2022 12:45:02 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 23, 2022 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 23, 2022 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 23, 2022 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 23, 2022 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 23, 2022 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 23, 2022 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 23, 2022 12:45:10 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 23, 2022 12:45:10 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
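
The SQL text and the two plans above come from Beam SQL's Calcite planner. A minimal sketch of how the same query can be issued through the public API, assuming a `pipeline` and assuming `HACKER_NEWS` has been registered with the BigQuery table provider (for example via CREATE EXTERNAL TABLE DDL, which this log does not show):

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTableProvider;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Runs the query from the log; before expanding to transforms the planner
    // prints a SQLPlan> and a BEAMPlan> like the ones shown above.
    PCollection<Row> result =
        pipeline.apply(
            SqlTransform.query(
                    "SELECT `by` AS `author`, `type`, `title`, `score` "
                        + "FROM `beam`.`HACKER_NEWS` "
                        + "WHERE (`type` = 'story' OR `type` = 'job') AND `score` > 2")
                .withTableProvider("beam", new BigQueryTableProvider()));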


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@356279811]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
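
Both failures in this run reduce to the same missing-coder problem: the `ParDo(RowMonitor)` output is a `PCollection<Row>`, and Beam cannot infer a coder for `Row` unless the collection carries a schema. A sketch of the remedy the exception itself names, with placeholder names (`monitored`, `schema`) that are not taken from this log:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    /**
     * Attaching the source schema lets Beam derive a RowCoder for the
     * collection, which is exactly what the coder-inference failure asks for.
     * Both arguments are placeholders for objects the test already holds.
     */
    static PCollection<Row> attachRowSchema(PCollection<Row> monitored, Schema schema) {
      return monitored.setRowSchema(schema);
    }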

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 23, 2022 12:45:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 23, 2022 12:45:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 23, 2022 12:45:10 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 23, 2022 12:45:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 23, 2022 12:45:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 23, 2022 12:45:10 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 23, 2022 12:45:11 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@43951584]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 23, 2022 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 23, 2022 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 23, 2022 12:45:11 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 23, 2022 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 23, 2022 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 23, 2022 12:45:11 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 23, 2022 12:45:11 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 23, 2022 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
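
The push-down variant is the only one of the three tests that succeeds, and the filter logged above is what the planner hands to the BigQuery Storage Read API. A rough hand-written equivalent, as a sketch only: `pipeline` is assumed to exist, and the table spec is a placeholder, since this log never names the underlying table.

    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    // DIRECT_READ applies the projection and the row restriction on the server
    // side, mirroring usedFields and BigQueryFilter in the BEAMPlan above.
    pipeline.apply(
        BigQueryIO.readTableRows()
            .from("project:dataset.hacker_news")  // placeholder table spec
            .withMethod(TypedRead.Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
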
    Feb 23, 2022 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 23, 2022 12:45:15 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 23, 2022 12:45:15 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 23, 2022 12:45:15 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3919764840638282034.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-2pIsOjLOJHvlFD4yXkAFImtzlcqhDwLEjs_1S0EYgxE.jar
    Feb 23, 2022 12:45:16 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 0 seconds
    Feb 23, 2022 12:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 23, 2022 12:45:16 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash 1a333a4036233a411f69c7e428d57ce6365763faff45b2c8f896b2618ff3f9ec> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-GjM6QDYjOkEfacfkKNV85jZXY_r_RbLI-JayYY_z-ew.pb
    Feb 23, 2022 12:45:19 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 23, 2022 12:45:19 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 23, 2022 12:45:19 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 23, 2022 12:45:19 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 23, 2022 12:45:20 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-23_04_45_20-3651517892477464191?project=apache-beam-testing
    Feb 23, 2022 12:45:20 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-23_04_45_20-3651517892477464191
    Feb 23, 2022 12:45:20 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-23_04_45_20-3651517892477464191
    Feb 23, 2022 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-23T12:45:22.044Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 23, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T12:45:34.786Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 23, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T12:45:35.474Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 23, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T12:45:35.510Z: Expanding GroupByKey operations into optimizable parts.
    Feb 23, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T12:45:35.564Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 23, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T12:45:35.642Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 23, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T12:45:35.664Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 23, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T12:45:35.695Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 23, 2022 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T12:45:36.064Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 23, 2022 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T12:45:36.132Z: Starting 5 workers in us-central1-f...
    Feb 23, 2022 12:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T12:45:41.765Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 23, 2022 12:46:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T12:46:18.649Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 23, 2022 12:46:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T12:46:44.008Z: Workers have started successfully.
    Feb 23, 2022 12:46:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T12:46:44.043Z: Workers have started successfully.
    Feb 23, 2022 12:47:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T12:47:18.112Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 23, 2022 12:47:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T12:47:18.277Z: Cleaning up.
    Feb 23, 2022 12:47:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T12:47:18.401Z: Stopping worker pool...
    Feb 23, 2022 12:49:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T12:49:44.678Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 23, 2022 12:49:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T12:49:44.738Z: Worker pool stopped.
    Feb 23, 2022 12:49:50 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-23_04_45_20-3651517892477464191 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 5fc12243-44a5-4c2c-b2de-6a66b57e7151 and timestamp: 2022-02-23T12:49:50.315000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.316

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 23, 2022 12:49:50 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
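
The warning above means the run's metrics were printed but not exported: the InfluxDB publisher was not told which measurement and database to write to. Assuming this harness reads the usual Beam InfluxDB test options (the option names below are an assumption; they do not appear in this log), publishing could be enabled by extending the -DbeamTestPipelineOptions array with entries such as:

    "--influxMeasurement=sql_bqio_read_java_batch",
    "--influxDatabase=<your InfluxDB database>",
    "--influxHost=<your InfluxDB endpoint>"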

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 55.995 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings
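
To locate those deprecations, the same task can be re-run with the flag Gradle suggests, for example:

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --warning-mode all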

BUILD FAILED in 5m 26s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/55sunvacf4dpq

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3060

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3060/display/redirect?page=changes>

Changes:

[Kyle Weaver] [BEAM-13796] projection pushdown in BQ IO

[Kyle Weaver] [BEAM-13796] Move test to ReadTest class and correct javadoc for

[Kyle Weaver] [BEAM-13796] Pushdown is not supported on TypedRead#fromQuery.


------------------------------------------
[...truncated 348.17 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is e27bf5f90ec76a6d3eade9f59506d8a8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 8'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 8'
Successfully started process 'Gradle Test Executor 8'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 23, 2022 6:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 23, 2022 6:44:58 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 23, 2022 6:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 23, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 23, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 23, 2022 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 23, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 23, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 23, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 23, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@356279811]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 23, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 23, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 23, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 23, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 23, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 23, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 23, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@474324008]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 23, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 23, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 23, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 23, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 23, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 23, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 23, 2022 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 23, 2022 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 23, 2022 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 23, 2022 6:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 23, 2022 6:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 23, 2022 6:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5804267065816331325.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-fwqzbW8yKdNwVKKmhk3jWH_ebrh6asnnhjwtS2Usn_g.jar
    Feb 23, 2022 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 0 seconds
    Feb 23, 2022 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 23, 2022 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143945 bytes, hash dc71173597e2b2ff680aa67b1a82d2ae205138d6a06c01da8dcf2dc4b35e52cf> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-3HEXNZfisv9oCqZ7GoLSriBRONagbAHajc8txLNeUs8.pb
    Feb 23, 2022 6:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 23, 2022 6:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 23, 2022 6:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 23, 2022 6:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 23, 2022 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-22_22_45_13-15572912199192652200?project=apache-beam-testing
    Feb 23, 2022 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-22_22_45_13-15572912199192652200
    Feb 23, 2022 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-22_22_45_13-15572912199192652200
    Feb 23, 2022 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-23T06:45:16.189Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 23, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T06:45:24.327Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 23, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T06:45:24.941Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 23, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T06:45:24.978Z: Expanding GroupByKey operations into optimizable parts.
    Feb 23, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T06:45:25.016Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 23, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T06:45:25.073Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 23, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T06:45:25.102Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 23, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T06:45:25.137Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 23, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T06:45:25.528Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 23, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T06:45:25.620Z: Starting 5 workers in us-central1-f...
    Feb 23, 2022 6:45:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T06:45:50.649Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 23, 2022 6:46:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T06:46:17.486Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 23, 2022 6:46:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T06:46:42.209Z: Workers have started successfully.
    Feb 23, 2022 6:46:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T06:46:42.230Z: Workers have started successfully.
    Feb 23, 2022 6:47:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T06:47:17.233Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 23, 2022 6:47:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T06:47:17.386Z: Cleaning up.
    Feb 23, 2022 6:47:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T06:47:17.461Z: Stopping worker pool...
    Feb 23, 2022 6:49:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T06:49:43.675Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 23, 2022 6:49:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T06:49:43.722Z: Worker pool stopped.
    Feb 23, 2022 6:49:50 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-22_22_45_13-15572912199192652200 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 5dcb465e-155c-4823-8677-7e1c8444ed08 and timestamp: 2022-02-23T06:49:50.344000000Z:
                     Metric:                    Value:
                   read_time                     8.694
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 23, 2022 6:49:50 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 8 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.005 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.004 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 57.046 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 30s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/n2ngzik64ycq4

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3059

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3059/display/redirect?page=changes>

Changes:

[noreply] Palo Alto case study (#16915)


------------------------------------------
[...truncated 346.30 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9e6e231f8b7414ad564de7ce80204b72
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 23, 2022 12:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 23, 2022 12:44:55 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 23, 2022 12:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 23, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 23, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 23, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 23, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 23, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 23, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 23, 2022 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 23, 2022 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 23, 2022 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 23, 2022 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 23, 2022 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 23, 2022 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 23, 2022 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 23, 2022 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@848490487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 23, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 23, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 23, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 23, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 23, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 23, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 23, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 23, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
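
Note how this BEAMPlan differs from the two failing tests above: the BeamCalcRel is gone, and both the projection (usedFields) and the supported filter travel with the IO source itself. At the BigQuery Storage Read API level that corresponds, roughly, to selecting only the used columns and filtering rows server-side. An illustrative sketch of that mapping (an assumption for exposition, not the code this test executes):

    import com.google.cloud.bigquery.storage.v1.ReadSession;

    class PushDownSketch {
      // Roughly what the pushed-down read asks of the Storage Read API:
      // read only the used columns and apply the filter server-side.
      static ReadSession.TableReadOptions tableReadOptions() {
        return ReadSession.TableReadOptions.newBuilder()
            .addSelectedFields("by")
            .addSelectedFields("type")
            .addSelectedFields("title")
            .addSelectedFields("score")
            .setRowRestriction("(type = 'story' OR type = 'job') AND score > 2")
            .build();
      }
    }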
    Feb 23, 2022 12:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 23, 2022 12:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 23, 2022 12:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 23, 2022 12:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4679651995106493559.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-SzIHizGgRY72poe_IzBSbJuad-gzvwgQQmslW31xDJQ.jar
    Feb 23, 2022 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 3 seconds
    Feb 23, 2022 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 23, 2022 12:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash 7699c93295014ef92b3b6870b8bf7131ba0464baff40c1d9054b6219f510b62f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-dpnJMpUBTvkrO2hwuL9xMboEZLr_QMHZBUtiGfUQti8.pb
    Feb 23, 2022 12:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 23, 2022 12:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 23, 2022 12:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 23, 2022 12:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 23, 2022 12:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-22_16_45_13-8790533879396625581?project=apache-beam-testing
    Feb 23, 2022 12:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-22_16_45_13-8790533879396625581
    Feb 23, 2022 12:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-22_16_45_13-8790533879396625581
    Feb 23, 2022 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-23T00:45:16.097Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 23, 2022 12:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T00:45:24.727Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 23, 2022 12:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T00:45:25.269Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 23, 2022 12:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T00:45:25.308Z: Expanding GroupByKey operations into optimizable parts.
    Feb 23, 2022 12:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T00:45:25.338Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 23, 2022 12:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T00:45:25.418Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 23, 2022 12:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T00:45:25.445Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 23, 2022 12:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T00:45:25.478Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 23, 2022 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T00:45:25.786Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 23, 2022 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T00:45:25.871Z: Starting 5 workers in us-central1-a...
    Feb 23, 2022 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T00:45:46.054Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 23, 2022 12:46:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T00:46:11.903Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 23, 2022 12:46:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T00:46:37.130Z: Workers have started successfully.
    Feb 23, 2022 12:46:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T00:46:37.159Z: Workers have started successfully.
    Feb 23, 2022 12:47:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T00:47:12.262Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 23, 2022 12:47:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T00:47:12.401Z: Cleaning up.
    Feb 23, 2022 12:47:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T00:47:12.488Z: Stopping worker pool...
    Feb 23, 2022 12:49:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T00:49:38.450Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 23, 2022 12:49:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-23T00:49:38.514Z: Worker pool stopped.
    Feb 23, 2022 12:49:46 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-22_16_45_13-8790533879396625581 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c95e2c17-cd9a-4c12-9077-0dca8590e3c4 and timestamp: 2022-02-23T00:49:46.094000000Z:
                     Metric:                    Value:
                   read_time                    13.117
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 23, 2022 12:49:46 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
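
This warning only means the harness was not configured with an InfluxDB measurement/database, so the metrics above are printed but not published. If publishing were wanted, the missing settings would be passed through the test pipeline options, e.g. by appending something like the following to the -DbeamTestPipelineOptions array shown at the top of this task's output (option names and values assumed from Beam's test-utils publishing conventions; verify before relying on them):

    "--influxMeasurement=sql_bqio_read_java_batch","--influxDatabase=beam_test_metrics","--influxHost=http://localhost:8086"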

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 55.859 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 24s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/ljqczgabo6rsw

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3058

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3058/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13738] Reenable ignored SQS test after bumping elasticmq for fixed

[noreply] fix build status link (#16907)

[noreply] Merge pull request #16549 from [BEAM-13681][Playground] Refactoring

[noreply] Merge pull request #16732 from [BEAM-13825] [Playground] updated

[noreply] Merge pull request #16683 from [BEAM-13713][Playground] Java graph

[noreply] case study pages - improvements and fixes (#16896)


------------------------------------------
[...truncated 345.74 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9e6e231f8b7414ad564de7ce80204b72
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 22, 2022 6:44:45 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 22, 2022 6:44:46 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 22, 2022 6:44:47 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 22, 2022 6:44:51 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 22, 2022 6:44:51 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 22, 2022 6:44:52 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 22, 2022 6:44:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 22, 2022 6:44:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 22, 2022 6:44:52 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 22, 2022 6:44:53 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 22, 2022 6:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 22, 2022 6:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 22, 2022 6:44:53 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 22, 2022 6:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 22, 2022 6:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 22, 2022 6:44:53 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 22, 2022 6:44:53 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@848490487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 22, 2022 6:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 22, 2022 6:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 22, 2022 6:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 22, 2022 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 22, 2022 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 22, 2022 6:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 22, 2022 6:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 22, 2022 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 22, 2022 6:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 22, 2022 6:44:59 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 22, 2022 6:44:59 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 22, 2022 6:44:59 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7391406736888412450.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-lx91N040dYmHB2oAzWDMlu6yCu00fI2IVQblIx5v1jw.jar
    Feb 22, 2022 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 1 seconds
    Feb 22, 2022 6:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 22, 2022 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash f83f8f8fc7baabf02bb5174e81a620101786555dedc3d0183962e54838a5ece4> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline--D-Pj8e6q_ArtRdOgaYgEBeGVV3tw9AYOWLlSDil7OQ.pb
    Feb 22, 2022 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 22, 2022 6:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 22, 2022 6:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 22, 2022 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 22, 2022 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-22_10_45_04-6253713071931415906?project=apache-beam-testing
    Feb 22, 2022 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-22_10_45_04-6253713071931415906
    Feb 22, 2022 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-22_10_45_04-6253713071931415906
    Feb 22, 2022 6:45:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-22T18:45:06.181Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 22, 2022 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T18:45:15.840Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 22, 2022 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T18:45:16.519Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 22, 2022 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T18:45:16.575Z: Expanding GroupByKey operations into optimizable parts.
    Feb 22, 2022 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T18:45:16.616Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 22, 2022 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T18:45:16.715Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 22, 2022 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T18:45:16.764Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 22, 2022 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T18:45:16.798Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 22, 2022 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T18:45:17.387Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 22, 2022 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T18:45:17.490Z: Starting 5 workers in us-central1-f...
    Feb 22, 2022 6:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T18:45:21.749Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 22, 2022 6:46:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T18:46:03.585Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 22, 2022 6:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T18:46:29.837Z: Workers have started successfully.
    Feb 22, 2022 6:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T18:46:29.877Z: Workers have started successfully.
    Feb 22, 2022 6:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T18:47:02.198Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 22, 2022 6:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T18:47:02.544Z: Cleaning up.
    Feb 22, 2022 6:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T18:47:02.710Z: Stopping worker pool...
    Feb 22, 2022 6:49:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T18:49:28.969Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 22, 2022 6:49:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T18:49:29.036Z: Worker pool stopped.
    Feb 22, 2022 6:49:37 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-22_10_45_04-6253713071931415906 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): dcf4255e-6acd-4550-8166-3ee9dc85422e and timestamp: 2022-02-22T18:49:37.235000000Z:
                     Metric:                    Value:
                   read_time                      8.39
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 22, 2022 6:49:37 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.008 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.006 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker Thread 2,5,main]) completed. Took 4 mins 55.282 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 8s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/2dy5ujni3e4pa

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3057

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3057/display/redirect>

Changes:


------------------------------------------
[...truncated 350.28 KB...]
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 22, 2022 1:03:19 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 22, 2022 1:03:20 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 22, 2022 1:03:21 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 22, 2022 1:03:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 22, 2022 1:03:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 22, 2022 1:03:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 22, 2022 1:03:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 22, 2022 1:03:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 22, 2022 1:03:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 22, 2022 1:03:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 22, 2022 1:03:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 22, 2022 1:03:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 22, 2022 1:03:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 22, 2022 1:03:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 22, 2022 1:03:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 22, 2022 1:03:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 22, 2022 1:03:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@848490487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 22, 2022 1:03:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 22, 2022 1:03:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 22, 2022 1:03:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 22, 2022 1:03:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 22, 2022 1:03:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 22, 2022 1:03:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 22, 2022 1:03:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 22, 2022 1:03:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 22, 2022 1:03:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 22, 2022 1:03:32 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 22, 2022 1:03:32 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 22, 2022 1:03:32 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7988801380542459306.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-KActP9GbUbHmmrW-7Jib7DWZBMHHHzOmbmI2gXbyu2M.jar
    Feb 22, 2022 1:03:33 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 0 seconds
    Feb 22, 2022 1:03:33 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 22, 2022 1:03:33 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143944 bytes, hash d3c4d4833ab866fab4a5e0453dffb94077ab66de107a8765ecf62e248200c224> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-08TUgzq4Zvq0peBFPf-5QHerZt4Qeodl7PYuJIIAwiQ.pb
    Feb 22, 2022 1:03:36 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 22, 2022 1:03:36 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 22, 2022 1:03:36 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 22, 2022 1:03:36 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 22, 2022 1:03:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-22_05_03_37-17418684589572023763?project=apache-beam-testing
    Feb 22, 2022 1:03:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-22_05_03_37-17418684589572023763
    Feb 22, 2022 1:03:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-22_05_03_37-17418684589572023763
    Feb 22, 2022 1:03:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-22T13:03:39.027Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 22, 2022 1:03:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T13:03:47.727Z: Worker configuration: e2-standard-2 in us-central1-b.
    Feb 22, 2022 1:03:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T13:03:48.380Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 22, 2022 1:03:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T13:03:48.419Z: Expanding GroupByKey operations into optimizable parts.
    Feb 22, 2022 1:03:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T13:03:48.455Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 22, 2022 1:03:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T13:03:48.528Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 22, 2022 1:03:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T13:03:48.561Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 22, 2022 1:03:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T13:03:48.583Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 22, 2022 1:03:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T13:03:48.935Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 22, 2022 1:03:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T13:03:49.012Z: Starting 5 workers in us-central1-b...
    Feb 22, 2022 1:04:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T13:04:19.534Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 22, 2022 1:04:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T13:04:30.891Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 22, 2022 1:04:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T13:04:56.644Z: Workers have started successfully.
    Feb 22, 2022 1:04:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T13:04:56.702Z: Workers have started successfully.
    Feb 22, 2022 1:05:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T13:05:28.419Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 22, 2022 1:05:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T13:05:28.602Z: Cleaning up.
    Feb 22, 2022 1:05:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T13:05:28.679Z: Stopping worker pool...
    Feb 22, 2022 1:07:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T13:07:51.957Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 22, 2022 1:07:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T13:07:52.004Z: Worker pool stopped.
    Feb 22, 2022 1:07:58 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-22_05_03_37-17418684589572023763 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 9442bc8c-f340-4c8c-a20b-8ad63f16eede and timestamp: 2022-02-22T13:07:58.767000000Z:
                     Metric:                    Value:
                   read_time                       9.8
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 22, 2022 1:07:58 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 44.251 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 43s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/6lznidpni2qbe

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3056

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3056/display/redirect>

Changes:


------------------------------------------
[...truncated 346.59 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9e6e231f8b7414ad564de7ce80204b72
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 22, 2022 6:44:49 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
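    (Per this warning, the fix on the Jenkins side would presumably be to replace the deprecated --workerHarnessContainerImage= entry in -DbeamTestPipelineOptions with --sdkContainerImage=, keeping the same empty value.)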
    Feb 22, 2022 6:44:50 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 22, 2022 6:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 22, 2022 6:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 22, 2022 6:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 22, 2022 6:44:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 22, 2022 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 22, 2022 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 22, 2022 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 22, 2022 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
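
(Note the contrast with the push-down plan later in this log: here the BEAMPlan keeps the projection and filter in a BeamCalcRel over a plain BeamIOSourceRel, so the source still reads all fourteen input columns (expr#0..13), whereas the push-down variant collapses this to a BeamPushDownIOSourceRel that reads only the four used fields.)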


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
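
The root cause named in this exception is the same in both failing tests: a ParDo outputs Beam Row elements but no schema is attached to the resulting PCollection, so no RowCoder can be inferred. A minimal, self-contained sketch of the remedy the error message points at (the schema fields mirror the four projected columns of the query above; the pass-through DoFn stands in for the test's RowMonitor and is illustrative, not the test's actual code):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFixSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create();

        // Schema mirroring the projected columns of the query above.
        Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

        PCollection<Row> rows =
            pipeline
                .apply(Create.of(
                        Row.withSchema(schema)
                            .addValues("alice", "story", "hello", 3L)
                            .build())
                    .withRowSchema(schema))
                .apply(ParDo.of(new DoFn<Row, Row>() {
                  @ProcessElement
                  public void process(@Element Row row, OutputReceiver<Row> out) {
                    out.output(row); // pass-through, like RowMonitor above
                  }
                }))
                // Without this call, coder inference fails exactly as in the
                // stack trace: "Cannot provide a coder for a Beam Row."
                .setRowSchema(schema);

        pipeline.run().waitUntilFinish();
      }
    }

In these tests the Row PCollection is produced inside BeamSqlRelUtils.toPCollection (BigQueryIOPushDownIT.java:150 and :163 in the traces), so the schema attachment would have to happen in the SQL layer rather than in test code; the sketch only illustrates the mechanism the error message asks for.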

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 22, 2022 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 22, 2022 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 22, 2022 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 22, 2022 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 22, 2022 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 22, 2022 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 22, 2022 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@848490487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 22, 2022 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 22, 2022 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 22, 2022 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 22, 2022 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 22, 2022 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 22, 2022 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 22, 2022 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 22, 2022 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
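
At the IO level, the push-down logged here amounts to the following, as a minimal sketch (the public Hacker News table name is illustrative; the test's own table wiring is not shown in this log): the usedFields from the BEAMPlan become the Storage Read API's selected fields, and the supported filter becomes its row restriction, so both are evaluated server-side before any rows reach the pipeline.

    import java.util.Arrays;

    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.values.PCollection;

    public class PushDownReadSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create();

        PCollection<TableRow> rows =
            pipeline.apply(
                BigQueryIO.readTableRows()
                    .from("bigquery-public-data:hacker_news.full") // illustrative table
                    .withMethod(TypedRead.Method.DIRECT_READ)
                    // usedFields=[[by, type, title, score]] from the plan above:
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    // the filter logged as pushed down:
                    .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        pipeline.run().waitUntilFinish();
      }
    }
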
    Feb 22, 2022 6:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 22, 2022 6:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 22, 2022 6:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 22, 2022 6:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8632566665632030353.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-oU7V_5EH-8plkOORKzCqyvJURJ-7qxp-0Gb4EcqVes4.jar
    Feb 22, 2022 6:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 0 seconds
    Feb 22, 2022 6:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 22, 2022 6:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash 244d69fed01a375d5143381a9ecfabcd5a26a8024dcc81cd4cde379d07acc55c> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-JE1p_tAaN11RQzgans-rzVomqAJNzIHNTN43nQesxVw.pb
    Feb 22, 2022 6:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 22, 2022 6:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 22, 2022 6:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 22, 2022 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 22, 2022 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-21_22_45_05-11808198246423772228?project=apache-beam-testing
    Feb 22, 2022 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-21_22_45_05-11808198246423772228
    Feb 22, 2022 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-21_22_45_05-11808198246423772228
    Feb 22, 2022 6:45:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-22T06:45:07.857Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 22, 2022 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T06:45:16.466Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 22, 2022 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T06:45:17.233Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 22, 2022 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T06:45:17.290Z: Expanding GroupByKey operations into optimizable parts.
    Feb 22, 2022 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T06:45:17.320Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 22, 2022 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T06:45:17.391Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 22, 2022 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T06:45:17.421Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 22, 2022 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T06:45:17.452Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 22, 2022 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T06:45:17.790Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 22, 2022 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T06:45:17.867Z: Starting 5 workers in us-central1-a...
    Feb 22, 2022 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T06:45:30.326Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 22, 2022 6:46:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T06:46:01.739Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 22, 2022 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T06:46:29.133Z: Workers have started successfully.
    Feb 22, 2022 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T06:46:29.161Z: Workers have started successfully.
    Feb 22, 2022 6:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T06:46:59.164Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 22, 2022 6:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T06:46:59.305Z: Cleaning up.
    Feb 22, 2022 6:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T06:46:59.370Z: Stopping worker pool...
    Feb 22, 2022 6:49:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T06:49:24.463Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 22, 2022 6:49:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T06:49:24.503Z: Worker pool stopped.
    Feb 22, 2022 6:49:29 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-21_22_45_05-11808198246423772228 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8b6d8737-8b3e-4193-82a4-4a5f25c3caf5 and timestamp: 2022-02-22T06:49:29.823000000Z:
                     Metric:                    Value:
                   read_time                     7.634
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 22, 2022 6:49:29 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
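(Of the three tests, readUsingDirectReadMethod and readUsingDefaultMethod fail at pipeline-construction time with the coder error above, before any job is submitted; only readUsingDirectReadMethodPushDown reaches Dataflow and finishes DONE, which is why a single job run appears per build in this log.)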
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 43.564 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 9s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/uqjxsb5xcqpz4

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3055

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3055/display/redirect>

Changes:


------------------------------------------
[...truncated 358.91 KB...]
Successfully started process 'Gradle Test Executor 2'

Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 22, 2022 4:08:58 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 22, 2022 4:08:59 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 22, 2022 4:09:00 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 22, 2022 4:09:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 22, 2022 4:09:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 22, 2022 4:09:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 22, 2022 4:09:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 22, 2022 4:09:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 22, 2022 4:09:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 22, 2022 4:09:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 22, 2022 4:09:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 22, 2022 4:09:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 22, 2022 4:09:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 22, 2022 4:09:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 22, 2022 4:09:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 22, 2022 4:09:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 22, 2022 4:09:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@848490487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 22, 2022 4:09:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 22, 2022 4:09:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 22, 2022 4:09:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 22, 2022 4:09:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 22, 2022 4:09:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 22, 2022 4:09:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 22, 2022 4:09:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 22, 2022 4:09:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 22, 2022 4:09:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 22, 2022 4:09:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 22, 2022 4:09:10 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 22, 2022 4:09:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test546009960365810956.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-85tCP1mrmUO5-PBsAoDQKNOIEPau-lEyOlCS6c3UhlY.jar
    Feb 22, 2022 4:09:11 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/jakarta.xml.bind/jakarta.xml.bind-api/2.3.3/48e3b9cfc10752fba3521d6511f4165bea951801/jakarta.xml.bind-api-2.3.3.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jakarta.xml.bind-api-2.3.3-wEU59HLppt0MdoXqgtZ3KCJpq457rKLhRQDjgeDGzsU.jar
    Feb 22, 2022 4:09:11 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 365 files cached, 2 files newly uploaded in 0 seconds
    Feb 22, 2022 4:09:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 22, 2022 4:09:11 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash 3834a3e9a622a4beb041030f472ff419363839887e89738582be54efd324b790> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-ODSj6aYipL6wQQMPRy_0GTY4OYh-iXOFgr5U79Mkt5A.pb
    Feb 22, 2022 4:09:14 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 22, 2022 4:09:14 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 22, 2022 4:09:14 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 22, 2022 4:09:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 22, 2022 4:09:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-21_20_09_14-10752225910844617811?project=apache-beam-testing
    Feb 22, 2022 4:09:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-21_20_09_14-10752225910844617811
    Feb 22, 2022 4:09:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-21_20_09_14-10752225910844617811
    Feb 22, 2022 4:09:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-22T04:09:17.270Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 22, 2022 4:09:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T04:09:29.268Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 22, 2022 4:09:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T04:09:30.060Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 22, 2022 4:09:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T04:09:30.091Z: Expanding GroupByKey operations into optimizable parts.
    Feb 22, 2022 4:09:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T04:09:30.142Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 22, 2022 4:09:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T04:09:30.205Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 22, 2022 4:09:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T04:09:30.248Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 22, 2022 4:09:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T04:09:30.277Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 22, 2022 4:09:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T04:09:30.632Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 22, 2022 4:09:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T04:09:30.722Z: Starting 5 workers in us-central1-a...
    Feb 22, 2022 4:09:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T04:09:39.668Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 22, 2022 4:10:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T04:10:14.820Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 22, 2022 4:10:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T04:10:40.329Z: Workers have started successfully.
    Feb 22, 2022 4:10:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T04:10:40.350Z: Workers have started successfully.
    Feb 22, 2022 4:11:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T04:11:07.707Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 22, 2022 4:11:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T04:11:07.879Z: Cleaning up.
    Feb 22, 2022 4:11:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T04:11:07.961Z: Stopping worker pool...
    Feb 22, 2022 4:13:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T04:13:32.891Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 22, 2022 4:13:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-22T04:13:32.952Z: Worker pool stopped.
    Feb 22, 2022 4:13:38 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-21_20_09_14-10752225910844617811 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): e45fc8aa-c11d-4e7e-9568-b0964c6c9147 and timestamp: 2022-02-22T04:13:38.808000000Z:
                     Metric:                    Value:
                   read_time                     7.564
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 22, 2022 4:13:39 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.112 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.047 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 46.255 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 56s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/i4wwgnjpa6thq

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3054

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3054/display/redirect>

Changes:


------------------------------------------
[...truncated 352.11 KB...]
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 21, 2022 7:31:23 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 21, 2022 7:31:24 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 21, 2022 7:31:24 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 21, 2022 7:31:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 21, 2022 7:31:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 21, 2022 7:31:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 21, 2022 7:31:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 21, 2022 7:31:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 21, 2022 7:31:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 21, 2022 7:31:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 21, 2022 7:31:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 21, 2022 7:31:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 21, 2022 7:31:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 21, 2022 7:31:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 21, 2022 7:31:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 21, 2022 7:31:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 21, 2022 7:31:30 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@848490487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 21, 2022 7:31:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 21, 2022 7:31:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 21, 2022 7:31:30 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 21, 2022 7:31:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 21, 2022 7:31:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 21, 2022 7:31:30 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 21, 2022 7:31:30 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 21, 2022 7:31:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
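
For orientation: the BeamPushDownIOSourceRel plan above amounts to a projected, filtered read through the BigQuery Storage Read API. A hedged sketch of the same read expressed directly against BigQueryIO -- the underlying BigQuery table name is assumed, since it is not shown in this log:

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // DIRECT_READ lets the Storage Read API do the work the planner
        // pushed down: only the four used fields are read, and the filter
        // is evaluated server-side as a row restriction.
        PCollection<TableRow> rows = p.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // assumed table
                .withMethod(TypedRead.Method.DIRECT_READ)
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }
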
    Feb 21, 2022 7:31:31 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 21, 2022 7:31:37 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 21, 2022 7:31:37 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 21, 2022 7:31:37 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1937776671834816061.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-pxu_6nzK3jRvTThOTzJmQOZ1QJoP1D1ytol5iPe7A04.jar
    Feb 21, 2022 7:31:38 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 file newly uploaded in 1 second
    Feb 21, 2022 7:31:38 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 21, 2022 7:31:38 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash d36531e6351a5bb3025adc82dd4c1eb09ca65b32363929251562e47fcc64637c> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-02Ux5jUaW7MCWtyC3UwesJymWzI2OSklFWLkf8xkY3w.pb
    Feb 21, 2022 7:31:41 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 21, 2022 7:31:42 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 21, 2022 7:31:42 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 21, 2022 7:31:42 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 21, 2022 7:31:44 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-21_11_31_42-6994291645730821454?project=apache-beam-testing
    Feb 21, 2022 7:31:44 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-21_11_31_42-6994291645730821454
    Feb 21, 2022 7:31:44 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-21_11_31_42-6994291645730821454
    Feb 21, 2022 7:31:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-21T19:31:46.589Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 21, 2022 7:31:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T19:31:55.324Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 21, 2022 7:31:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T19:31:56.124Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 21, 2022 7:31:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T19:31:56.149Z: Expanding GroupByKey operations into optimizable parts.
    Feb 21, 2022 7:31:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T19:31:56.187Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 21, 2022 7:31:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T19:31:56.249Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 21, 2022 7:31:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T19:31:56.275Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 21, 2022 7:31:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T19:31:56.299Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 21, 2022 7:31:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T19:31:56.649Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 21, 2022 7:31:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T19:31:56.737Z: Starting 5 workers in us-central1-a...
    Feb 21, 2022 7:32:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T19:32:08.012Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 21, 2022 7:32:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T19:32:45.299Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 21, 2022 7:33:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T19:33:11.826Z: Workers have started successfully.
    Feb 21, 2022 7:33:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T19:33:11.909Z: Workers have started successfully.
    Feb 21, 2022 7:33:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T19:33:40.460Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 21, 2022 7:33:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T19:33:40.627Z: Cleaning up.
    Feb 21, 2022 7:33:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T19:33:40.694Z: Stopping worker pool...
    Feb 21, 2022 7:36:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T19:36:07.767Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 21, 2022 7:36:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T19:36:07.813Z: Worker pool stopped.
    Feb 21, 2022 7:36:15 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-21_11_31_42-6994291645730821454 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 9ae5a692-2019-44cf-a5dc-e643daae7276 and timestamp: 2022-02-21T19:36:15.365000000Z:
                     Metric:                    Value:
                   read_time                     7.882
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 21, 2022 7:36:15 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
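
(This warning concerns only the InfluxDB sink: because the measurement/database settings were not supplied, InfluxDBPublisher skips publishing. The BigQuery metrics options passed to the test above (--metricsBigQueryDataset, --metricsBigQueryTable) appear to be a separate sink and are unaffected. The missing settings would presumably be supplied as additional pipeline options naming the InfluxDB database and measurement; the exact option names are not shown in this log.)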

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 55.94 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 42s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...

Publishing failed.

The build scan server appears to be unavailable.
Please check https://status.gradle.com for the latest service status.

If the service is reported as available, please report this problem via https://gradle.com/help/plugin and include the following via copy/paste:

----------
Gradle version: 7.3.2
Plugin version: 3.4.1
Request URL: https://status.gradle.com
Request ID: 0434ef24-3c15-4068-83b7-055039d7b59d
Response status code: 405
Response server type: Varnish
----------

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3053

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3053/display/redirect>

Changes:


------------------------------------------
[...truncated 353.57 KB...]
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]
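
(The SLF4J warning is classpath noise rather than a test problem: four bindings are present -- the one bundled in the legacy worker jar plus slf4j-jdk14, slf4j-log4j12, and slf4j-simple -- and SLF4J binds to one of them, here the JDK 1.4 java.util.logging adapter. The usual cleanup is to keep exactly one binding on the test runtime classpath and exclude the others from the dependency graph.)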

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 21, 2022 6:23:24 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 21, 2022 6:23:27 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 21, 2022 6:23:29 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 21, 2022 6:23:36 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 21, 2022 6:23:36 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 21, 2022 6:23:38 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 21, 2022 6:23:38 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 21, 2022 6:23:38 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 21, 2022 6:23:39 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 21, 2022 6:23:40 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 21, 2022 6:23:40 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 21, 2022 6:23:40 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 21, 2022 6:23:41 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 21, 2022 6:23:41 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 21, 2022 6:23:41 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 21, 2022 6:23:41 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 21, 2022 6:23:41 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1998372115]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 21, 2022 6:23:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 21, 2022 6:23:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 21, 2022 6:23:42 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 21, 2022 6:23:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 21, 2022 6:23:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 21, 2022 6:23:42 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 21, 2022 6:23:42 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 21, 2022 6:23:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 21, 2022 6:23:43 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 21, 2022 6:23:51 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 21, 2022 6:23:51 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 21, 2022 6:23:52 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6160826667460006633.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-kQc-pD47YiXLgw_pvQT-fnLx2yYGf3RtO1MI_4flyOc.jar
    Feb 21, 2022 6:23:55 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 file newly uploaded in 4 seconds
    Feb 21, 2022 6:23:55 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 21, 2022 6:23:55 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143942 bytes, hash 75242e125b7f09fbfedf8e11e13b7fecb5981ee7776c6c0a299058ac47966a24> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-dSQuElt_Cfv-344R4Tt_7LWYHud3bGwKKZBYrEeWaiQ.pb
    Feb 21, 2022 6:24:01 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 21, 2022 6:24:01 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 21, 2022 6:24:01 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 21, 2022 6:24:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 21, 2022 6:24:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-21_10_24_01-14050824939732373948?project=apache-beam-testing
    Feb 21, 2022 6:24:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-21_10_24_01-14050824939732373948
    Feb 21, 2022 6:24:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-21_10_24_01-14050824939732373948
    Feb 21, 2022 6:24:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-21T18:24:03.893Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 21, 2022 6:24:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T18:24:13.525Z: Worker configuration: e2-standard-2 in us-central1-b.
    Feb 21, 2022 6:24:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T18:24:14.215Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 21, 2022 6:24:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T18:24:14.249Z: Expanding GroupByKey operations into optimizable parts.
    Feb 21, 2022 6:24:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T18:24:14.301Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 21, 2022 6:24:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T18:24:14.353Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 21, 2022 6:24:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T18:24:14.377Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 21, 2022 6:24:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T18:24:14.401Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 21, 2022 6:24:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T18:24:14.649Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 21, 2022 6:24:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T18:24:14.730Z: Starting 5 workers in us-central1-b...
    Feb 21, 2022 6:24:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T18:24:36.875Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 21, 2022 6:24:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T18:24:50.599Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Feb 21, 2022 6:24:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T18:24:50.625Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Feb 21, 2022 6:25:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T18:25:00.881Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 21, 2022 6:25:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T18:25:24.949Z: Workers have started successfully.
    Feb 21, 2022 6:25:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T18:25:24.975Z: Workers have started successfully.
    Feb 21, 2022 6:25:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T18:25:58.179Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 21, 2022 6:25:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T18:25:58.327Z: Cleaning up.
    Feb 21, 2022 6:26:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T18:25:58.395Z: Stopping worker pool...
    Feb 21, 2022 6:28:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T18:28:33.648Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 21, 2022 6:28:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-21T18:28:33.699Z: Worker pool stopped.
    Feb 21, 2022 6:28:39 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-21_10_24_01-14050824939732373948 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 9b64eb35-fbaf-41b1-b566-f6a33f075da0 and timestamp: 2022-02-21T18:28:39.772000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.237

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 21, 2022 6:28:39 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.038 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 5 mins 25.633 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 15s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/majlzmycutrjy

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3052

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3052/display/redirect>

Changes:


------------------------------------------
[...truncated 346.20 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9e6e231f8b7414ad564de7ce80204b72
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 19, 2022 6:44:48 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 19, 2022 6:44:49 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 19, 2022 6:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 19, 2022 6:44:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 19, 2022 6:44:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 19, 2022 6:44:53 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 19, 2022 6:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 19, 2022 6:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 19, 2022 6:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 19, 2022 6:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 19, 2022 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 19, 2022 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 19, 2022 6:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 19, 2022 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 19, 2022 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 19, 2022 6:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 19, 2022 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@848490487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 19, 2022 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 19, 2022 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 19, 2022 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 19, 2022 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 19, 2022 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 19, 2022 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 19, 2022 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 19, 2022 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 19, 2022 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 19, 2022 6:44:59 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 19, 2022 6:44:59 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 19, 2022 6:44:59 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8867915646599246100.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-bqv4P5wC2DHvtVZ6EUa8XOe74kdqJN9-NtD-c3tsXZ4.jar
    Feb 19, 2022 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 file newly uploaded in 0 seconds
    Feb 19, 2022 6:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 19, 2022 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash cc393c77a0eb05413fa35bfbe3cd3bf8179e4f11e64004bd9fb755fd8b62235a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-zDk8d6DrBUE_o1v74807-BeeTxHmQAS9n7dV_YtiI1o.pb
    Feb 19, 2022 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 19, 2022 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 19, 2022 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 19, 2022 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 19, 2022 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-19_10_45_03-12639985800574476561?project=apache-beam-testing
    Feb 19, 2022 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-19_10_45_03-12639985800574476561
    Feb 19, 2022 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-19_10_45_03-12639985800574476561
    Feb 19, 2022 6:45:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-19T18:45:05.606Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 19, 2022 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T18:45:18.801Z: Worker configuration: e2-standard-2 in us-central1-b.
    Feb 19, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T18:45:19.523Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 19, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T18:45:19.565Z: Expanding GroupByKey operations into optimizable parts.
    Feb 19, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T18:45:19.593Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 19, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T18:45:19.653Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 19, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T18:45:19.691Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 19, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T18:45:19.721Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 19, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T18:45:20.319Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 19, 2022 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T18:45:20.472Z: Starting 5 workers in us-central1-b...
    Feb 19, 2022 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T18:45:26.875Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 19, 2022 6:46:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T18:46:06.848Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 19, 2022 6:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T18:46:35.344Z: Workers have started successfully.
    Feb 19, 2022 6:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T18:46:35.378Z: Workers have started successfully.
    Feb 19, 2022 6:47:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T18:47:09.711Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 19, 2022 6:47:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T18:47:09.887Z: Cleaning up.
    Feb 19, 2022 6:47:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T18:47:09.968Z: Stopping worker pool...
    Feb 19, 2022 6:49:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T18:49:38.884Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 19, 2022 6:49:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T18:49:38.934Z: Worker pool stopped.
    Feb 19, 2022 6:49:46 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-19_10_45_03-12639985800574476561 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c76b4f92-1211-4f06-9319-684e9e9d7f94 and timestamp: 2022-02-19T18:49:46.526000000Z:
                     Metric:                    Value:
                   read_time                    10.318
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 19, 2022 6:49:46 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 5 mins 1.434 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 27s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/dvkyirnmxjji2

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3051

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3051/display/redirect>

Changes:


------------------------------------------
[...truncated 346.75 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9e6e231f8b7414ad564de7ce80204b72
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 19, 2022 12:46:56 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
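
The deprecated flag appears in the beamTestPipelineOptions of the Gradle command line above as --workerHarnessContainerImage=; assuming nothing else in the invocation changes, the fix the warning asks for is simply the renamed option:

    --workerHarnessContainerImage=   (deprecated legacy option, as passed above)
    --sdkContainerImage=             (preferred replacement)
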
    Feb 19, 2022 12:46:57 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 19, 2022 12:46:58 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 19, 2022 12:47:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 19, 2022 12:47:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 19, 2022 12:47:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 19, 2022 12:47:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 19, 2022 12:47:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 19, 2022 12:47:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 19, 2022 12:47:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
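
The two blocks above show Beam SQL's Calcite planning at work: the query is parsed, lowered to a LogicalProject/LogicalFilter tree (SQLPlan), then to Beam rels (BEAMPlan), here a BeamCalcRel over the unfiltered source, since this test variant does not push the filter into BigQuery. A minimal, self-contained sketch of issuing the same shape of query through SqlTransform, using an in-memory PCOLLECTION with a hypothetical four-field schema in place of the real 14-column beam.HACKER_NEWS table:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushDownQuerySketch {
      public static void main(String[] args) {
        // Hypothetical four-field schema; the real table has 14 columns (expr#0..13).
        Schema schema = Schema.builder()
            .addStringField("by")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

        Pipeline p = Pipeline.create(PipelineOptionsFactory.create());

        PCollection<Row> hackerNews = p.apply(
            Create.of(Row.withSchema(schema)
                    .addValues("alice", "story", "hi", 5L).build())
                .withRowSchema(schema));  // Rows need an explicit schema or coder

        // Same projection and filter as the test's SQL, against PCOLLECTION.
        hackerNews.apply(SqlTransform.query(
            "SELECT `by` AS author, type, title, score FROM PCOLLECTION "
                + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }

When the input PCollection carries no schema, this is exactly the point where the coder failure reported below is raised.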


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
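
For a PCollection of Beam Rows, the two remedies named in the error message converge: attaching a schema is what gives the collection a usable RowCoder. A sketch of both calls, with hypothetical names (rows, schema) rather than the test's actual code, which constructs this PCollection inside BeamSqlRelUtils:

    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class CoderFixSketch {
      // Preferred: attach the schema, which implies a RowCoder.
      static PCollection<Row> withRowSchema(PCollection<Row> rows, Schema schema) {
        return rows.setRowSchema(schema);
      }

      // Equivalent for coder resolution: set the RowCoder explicitly.
      static PCollection<Row> withExplicitCoder(PCollection<Row> rows, Schema schema) {
        return rows.setCoder(RowCoder.of(schema));
      }
    }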

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 19, 2022 12:47:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 19, 2022 12:47:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 19, 2022 12:47:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 19, 2022 12:47:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 19, 2022 12:47:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 19, 2022 12:47:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 19, 2022 12:47:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@848490487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 19, 2022 12:47:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 19, 2022 12:47:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 19, 2022 12:47:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 19, 2022 12:47:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 19, 2022 12:47:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 19, 2022 12:47:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 19, 2022 12:47:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 19, 2022 12:47:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
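
At the IO level, the push-down logged above corresponds to a BigQuery Storage API read with a projected field list and a row restriction. A sketch of that configuration on BigQueryIO (the table spec and wiring are illustrative assumptions, not what BigQueryTable.buildIOReader literally emits):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        p.apply(BigQueryIO.readTableRows()
            .from("apache-beam-testing:beam.HACKER_NEWS")  // hypothetical table spec
            .withMethod(TypedRead.Method.DIRECT_READ)      // Storage API, as logged
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }

Reading only the selected fields under this restriction is what the read_time and fields_read metrics reported further down measure.
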
    Feb 19, 2022 12:47:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 19, 2022 12:47:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 19, 2022 12:47:08 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 19, 2022 12:47:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5047833222512315003.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-FNyMLh1rRracykCHr3ZS8ofO3dk78fyj9IlcfhFgmA4.jar
    Feb 19, 2022 12:47:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 0 seconds
    Feb 19, 2022 12:47:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 19, 2022 12:47:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash 235e2fc3e5122ac5d52ac86013f166f8e2a580f734ab59fc165bdab947630c25> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-I14vw-USKsXVKshgE_Fm-OKlgPc0q1n8FlvauUdjDCU.pb
    Feb 19, 2022 12:47:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 19, 2022 12:47:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 19, 2022 12:47:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 19, 2022 12:47:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 19, 2022 12:47:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-19_04_47_12-4810030390417849784?project=apache-beam-testing
    Feb 19, 2022 12:47:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-19_04_47_12-4810030390417849784
    Feb 19, 2022 12:47:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-19_04_47_12-4810030390417849784
    Feb 19, 2022 12:47:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-19T12:47:14.929Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 19, 2022 12:47:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T12:47:27.667Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 19, 2022 12:47:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T12:47:28.349Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 19, 2022 12:47:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T12:47:28.413Z: Expanding GroupByKey operations into optimizable parts.
    Feb 19, 2022 12:47:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T12:47:28.472Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 19, 2022 12:47:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T12:47:28.604Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 19, 2022 12:47:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T12:47:28.641Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 19, 2022 12:47:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T12:47:28.674Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 19, 2022 12:47:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T12:47:29.474Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 19, 2022 12:47:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T12:47:29.587Z: Starting 5 workers in us-central1-f...
    Feb 19, 2022 12:47:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T12:47:50.308Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 19, 2022 12:48:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T12:48:15.906Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 19, 2022 12:48:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T12:48:39.989Z: Workers have started successfully.
    Feb 19, 2022 12:48:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T12:48:40.033Z: Workers have started successfully.
    Feb 19, 2022 12:49:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T12:49:11.009Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 19, 2022 12:49:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T12:49:11.320Z: Cleaning up.
    Feb 19, 2022 12:49:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T12:49:11.512Z: Stopping worker pool...
    Feb 19, 2022 12:51:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T12:51:45.549Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 19, 2022 12:51:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T12:51:45.594Z: Worker pool stopped.
    Feb 19, 2022 12:51:51 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-19_04_47_12-4810030390417849784 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 72ce5be3-9691-4a48-b832-db65f644af12 and timestamp: 2022-02-19T12:51:51.694000000Z:
                     Metric:                    Value:
                   read_time                     8.904
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 19, 2022 12:51:51 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 58.53 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 25s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/2aywopt7qlq4i

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3050

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3050/display/redirect>

Changes:


------------------------------------------
[...truncated 348.41 KB...]

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 19, 2022 6:46:29 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 19, 2022 6:46:30 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 19, 2022 6:46:30 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 19, 2022 6:46:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 19, 2022 6:46:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 19, 2022 6:46:34 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 19, 2022 6:46:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 19, 2022 6:46:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 19, 2022 6:46:35 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 19, 2022 6:46:35 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 19, 2022 6:46:35 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 19, 2022 6:46:35 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 19, 2022 6:46:35 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 19, 2022 6:46:35 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 19, 2022 6:46:35 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 19, 2022 6:46:35 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 19, 2022 6:46:36 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@848490487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 19, 2022 6:46:36 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 19, 2022 6:46:36 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 19, 2022 6:46:36 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 19, 2022 6:46:36 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 19, 2022 6:46:36 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 19, 2022 6:46:36 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 19, 2022 6:46:36 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 19, 2022 6:46:36 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 19, 2022 6:46:37 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 19, 2022 6:46:41 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 19, 2022 6:46:42 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 19, 2022 6:46:42 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1969897705068797362.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-v7Jrd8LKvKffzNR5yCJodDBFR78bOyEvjysoXaQeRXo.jar
    Feb 19, 2022 6:46:45 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 3 seconds
    Feb 19, 2022 6:46:45 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 19, 2022 6:46:45 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash 0e9333c8f958527048b3aec361f2795cedf68eae0917b80b580f64bde9736492> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-DpMzyPlYUnBIs67DYfJ5XO32jq4JF7gLWA9kvelzZJI.pb
    Feb 19, 2022 6:46:49 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 19, 2022 6:46:49 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 19, 2022 6:46:49 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 19, 2022 6:46:49 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 19, 2022 6:46:50 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-18_22_46_49-4590193070711586374?project=apache-beam-testing
    Feb 19, 2022 6:46:50 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-18_22_46_49-4590193070711586374
    Feb 19, 2022 6:46:50 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-18_22_46_49-4590193070711586374
    Feb 19, 2022 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-19T06:46:51.583Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 19, 2022 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T06:47:05.216Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 19, 2022 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T06:47:05.982Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 19, 2022 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T06:47:06.020Z: Expanding GroupByKey operations into optimizable parts.
    Feb 19, 2022 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T06:47:06.048Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 19, 2022 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T06:47:06.118Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 19, 2022 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T06:47:06.146Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 19, 2022 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T06:47:06.179Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 19, 2022 6:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T06:47:06.523Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 19, 2022 6:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T06:47:06.595Z: Starting 5 workers in us-central1-f...
    Feb 19, 2022 6:47:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T06:47:28.158Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 19, 2022 6:47:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T06:47:43.914Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Feb 19, 2022 6:47:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T06:47:43.947Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Feb 19, 2022 6:47:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T06:47:54.185Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 19, 2022 6:48:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T06:48:19.630Z: Workers have started successfully.
    Feb 19, 2022 6:48:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T06:48:19.657Z: Workers have started successfully.
    Feb 19, 2022 6:48:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T06:48:52.710Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 19, 2022 6:48:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T06:48:52.871Z: Cleaning up.
    Feb 19, 2022 6:48:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T06:48:52.948Z: Stopping worker pool...
    Feb 19, 2022 6:51:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T06:51:17.586Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 19, 2022 6:51:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T06:51:17.638Z: Worker pool stopped.
    Feb 19, 2022 6:51:25 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-18_22_46_49-4590193070711586374 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 3502c614-573e-49c8-bea5-30742b9a1f23 and timestamp: 2022-02-19T06:51:25.523000000Z:
                     Metric:                    Value:
                   read_time                     8.555
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 19, 2022 6:51:25 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 5 mins 0.751 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 29s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/gh2f7pwbg2pwa

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3049

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3049/display/redirect?page=changes>

Changes:

[rogelio.hernandez] [BEAM-13051] Pylint misplaced-bare-raise warning enabled

[rogelio.hernandez] [BEAM-13051] Added descriptions to Kinesis and PortableRunner exceptions

[noreply] [BEAM-13812] Integrate DataprocClusterManager into Interactive

[noreply] [BEAM-12572] Fix failing python examples tests in Dataflow runner

[noreply] Remove build status from PR (#16902)


------------------------------------------
[...truncated 346.54 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9e6e231f8b7414ad564de7ce80204b72
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 19, 2022 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 19, 2022 12:44:52 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 19, 2022 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 19, 2022 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 19, 2022 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 19, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 19, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 19, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 19, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 19, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 19, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 19, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 19, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 19, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 19, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 19, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 19, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@635097653]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 19, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 19, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 19, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 19, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 19, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 19, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 19, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 19, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 19, 2022 12:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 19, 2022 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 19, 2022 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 19, 2022 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4017507760920701202.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-tmOE4oPpOW09Pwvglaql4svxnAI58ppkeGRDvuqi28w.jar
    Feb 19, 2022 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 0 seconds
    Feb 19, 2022 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 19, 2022 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash 231b3455cd5852fa9b3619f8235fbb4fa4588b3811d3252a75d47df5727e23f3> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Ixs0Vc1YUvqbNhn4I1-7T6RYizgR0yUqddR99XJ-I_M.pb
    Feb 19, 2022 12:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 19, 2022 12:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 19, 2022 12:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 19, 2022 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 19, 2022 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-18_16_45_06-6056826491813198964?project=apache-beam-testing
    Feb 19, 2022 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-18_16_45_06-6056826491813198964
    Feb 19, 2022 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-18_16_45_06-6056826491813198964
    Feb 19, 2022 12:45:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-19T00:45:08.415Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 19, 2022 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T00:45:20.166Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 19, 2022 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T00:45:20.884Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 19, 2022 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T00:45:20.934Z: Expanding GroupByKey operations into optimizable parts.
    Feb 19, 2022 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T00:45:20.960Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 19, 2022 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T00:45:21.035Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 19, 2022 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T00:45:21.073Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 19, 2022 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T00:45:21.096Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 19, 2022 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T00:45:21.502Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 19, 2022 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T00:45:21.577Z: Starting 5 workers in us-central1-a...
    Feb 19, 2022 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T00:45:26.456Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 19, 2022 12:46:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T00:46:06.770Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 19, 2022 12:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T00:46:31.799Z: Workers have started successfully.
    Feb 19, 2022 12:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T00:46:31.832Z: Workers have started successfully.
    Feb 19, 2022 12:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T00:47:05.478Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 19, 2022 12:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T00:47:05.667Z: Cleaning up.
    Feb 19, 2022 12:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T00:47:05.755Z: Stopping worker pool...
    Feb 19, 2022 12:49:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T00:49:38.713Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 19, 2022 12:49:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-19T00:49:38.768Z: Worker pool stopped.
    Feb 19, 2022 12:49:46 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-18_16_45_06-6056826491813198964 finished with status DONE.
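
Unlike the two failing tests, this one reached DONE because the planner produced a BeamPushDownIOSourceRel instead of a BeamCalcRel over a plain BeamIOSourceRel: the projection and the filter were handed to BigQuery itself, as the "Pushing down the following filter" line shows. In BigQueryIO terms that is a Storage Read API (DIRECT_READ) read with selected fields and a row restriction. Below is a rough, hedged sketch of the equivalent hand-written read; the public hacker_news table name is an assumption for illustration and does not come from this log.

    import com.google.api.services.bigquery.model.TableRow;
    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;

    public class PushDownEquivalentSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // DIRECT_READ = BigQuery Storage Read API. Selected fields and the row
        // restriction are applied server-side, so only the four projected
        // columns of matching rows leave BigQuery (the effect of the
        // push-down logged above).
        PCollection<TableRow> rows = p.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full")  // assumed table
                .withMethod(TypedRead.Method.DIRECT_READ)
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }

In the Beam SQL path exercised here, that choice is driven by the table's configuration rather than by user code, which is what the "BigQuery method is set to: DIRECT_READ" lines reflect.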

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 27bb5333-e2e0-4ae3-ac73-d5bab6bccb78 and timestamp: 2022-02-19T00:49:46.583000000Z:
                     Metric:                    Value:
                   read_time                     8.199
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 19, 2022 12:49:46 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 58.804 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 23s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/mmpekav2qwd7e

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3048

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3048/display/redirect?page=changes>

Changes:

[thiagotnunes] fix: fix bug when retrieving either string or json


------------------------------------------
[...truncated 348.75 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 18, 2022 6:54:35 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 18, 2022 6:54:36 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 18, 2022 6:54:37 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 18, 2022 6:54:39 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 18, 2022 6:54:39 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 18, 2022 6:54:40 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 18, 2022 6:54:40 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 18, 2022 6:54:40 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 18, 2022 6:54:41 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 18, 2022 6:54:41 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 18, 2022 6:54:41 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 18, 2022 6:54:41 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 18, 2022 6:54:42 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 18, 2022 6:54:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 18, 2022 6:54:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 18, 2022 6:54:42 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 18, 2022 6:54:42 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@848490487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 18, 2022 6:54:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 18, 2022 6:54:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 18, 2022 6:54:42 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 18, 2022 6:54:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 18, 2022 6:54:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 18, 2022 6:54:42 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 18, 2022 6:54:42 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 18, 2022 6:54:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 18, 2022 6:54:43 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 18, 2022 6:54:48 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 18, 2022 6:54:48 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 18, 2022 6:54:48 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7463622745768348350.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-G9-jdC1BqNvWk3sJPr96e3EmrvosjO-zkjhAXHGXTgI.jar
    Feb 18, 2022 6:54:48 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 0 seconds
    Feb 18, 2022 6:54:48 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 18, 2022 6:54:48 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143946 bytes, hash c8b190ec2dbaf2373303dca2ea3e0a19ace405e5639e139bac73507f6420ebf6> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-yLGQ7C268jczA9yi6j4KGazkBeVjnhObrHNQf2Qg6_Y.pb
    Feb 18, 2022 6:54:51 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 18, 2022 6:54:51 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 18, 2022 6:54:52 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 18, 2022 6:54:52 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 18, 2022 6:54:52 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-18_10_54_52-14574912835836680922?project=apache-beam-testing
    Feb 18, 2022 6:54:52 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-18_10_54_52-14574912835836680922
    Feb 18, 2022 6:54:52 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-18_10_54_52-14574912835836680922
    Feb 18, 2022 6:54:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-18T18:54:54.312Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 18, 2022 6:55:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T18:55:07.509Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 18, 2022 6:55:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T18:55:08.343Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 18, 2022 6:55:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T18:55:08.389Z: Expanding GroupByKey operations into optimizable parts.
    Feb 18, 2022 6:55:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T18:55:08.420Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 18, 2022 6:55:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T18:55:08.506Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 18, 2022 6:55:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T18:55:08.534Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 18, 2022 6:55:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T18:55:08.559Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 18, 2022 6:55:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T18:55:08.973Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 18, 2022 6:55:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T18:55:09.044Z: Starting 5 workers in us-central1-f...
    Feb 18, 2022 6:55:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T18:55:20.866Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 18, 2022 6:55:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T18:55:46.303Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Feb 18, 2022 6:55:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T18:55:46.322Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Feb 18, 2022 6:55:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T18:55:56.626Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 18, 2022 6:56:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T18:56:20.062Z: Workers have started successfully.
    Feb 18, 2022 6:56:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T18:56:20.116Z: Workers have started successfully.
    Feb 18, 2022 6:56:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T18:56:56.507Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 18, 2022 6:56:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T18:56:56.643Z: Cleaning up.
    Feb 18, 2022 6:56:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T18:56:56.721Z: Stopping worker pool...
    Feb 18, 2022 6:59:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T18:59:22.074Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 18, 2022 6:59:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T18:59:22.148Z: Worker pool stopped.
    Feb 18, 2022 6:59:30 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-18_10_54_52-14574912835836680922 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 12aa4af3-de58-492c-8159-fdfcb283a477 and timestamp: 2022-02-18T18:59:30.442000000Z:
                     Metric:                    Value:
                   read_time                     9.326
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 18, 2022 6:59:30 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 57.953 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 43s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/xiayd32tvxd4c

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3047

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3047/display/redirect?page=changes>

Changes:

[noreply] [adhoc] Avoid using SerializablePipelineOptions for testing to minimize


------------------------------------------
[...truncated 357.95 KB...]
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 18, 2022 12:47:19 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 18, 2022 12:47:19 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 18, 2022 12:47:20 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 18, 2022 12:47:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 18, 2022 12:47:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 18, 2022 12:47:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 18, 2022 12:47:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 18, 2022 12:47:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 18, 2022 12:47:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 18, 2022 12:47:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 18, 2022 12:47:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 18, 2022 12:47:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 18, 2022 12:47:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 18, 2022 12:47:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 18, 2022 12:47:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 18, 2022 12:47:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 18, 2022 12:47:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@848490487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 18, 2022 12:47:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 18, 2022 12:47:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 18, 2022 12:47:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 18, 2022 12:47:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 18, 2022 12:47:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 18, 2022 12:47:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 18, 2022 12:47:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 18, 2022 12:47:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 18, 2022 12:47:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 18, 2022 12:47:31 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 18, 2022 12:47:31 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 18, 2022 12:47:31 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2481688494752413731.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-5fVz95esezcfXIoJfPcTAqOcTLx3Y_0bRdEnzB2ofsg.jar
    Feb 18, 2022 12:47:32 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.dataformat/jackson-dataformat-xml/2.13.0/396634365e8439b27794fa94b571fdc03b4cf7be/jackson-dataformat-xml-2.13.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jackson-dataformat-xml-2.13.0-Yyu4WPqE_RYmgacP7BZkHcWrY2pzow2igQ0noIVmu7o.jar
    Feb 18, 2022 12:47:32 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.immutables/value/2.8.8/d99fa1e04af5a1fda42fa9412d68eb7fe17a1071/value-2.8.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/value-2.8.8-k_DQ4kEt4FHC2WzIqOzB87qR2WH_Mw81deYbQ6-miFA.jar
    Feb 18, 2022 12:47:32 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.fasterxml.woodstox/woodstox-core/6.2.6/d4a055af5dcadea8d1bded5b64bc7be5bb7fea95/woodstox-core-6.2.6.jar to gs://temp-storage-for-perf-tests/loadtests/staging/woodstox-core-6.2.6-7RMZid5VnxZ00HVSgs8J3XgTkSFBRgr4IvXvHJtaCuE.jar
    Feb 18, 2022 12:47:32 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.inject.extensions/guice-servlet/3.0/610cde0e8da5a8b7d8efb8f0b8987466ffebaaf9/guice-servlet-3.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/guice-servlet-3.0-nnKkuFgoiNU8L0KX6TJ2o8FMgogBJEkPLaexap3xxhg.jar
    Feb 18, 2022 12:47:32 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mortbay.jetty/servlet-api/2.5-20081211/22bff70037e1e6fa7e6413149489552ee2064702/servlet-api-2.5-20081211.jar to gs://temp-storage-for-perf-tests/loadtests/staging/servlet-api-2.5-20081211-BodWCWmW_gD2BKw7ZnLW9mPcd36kqDBW4kDQRW535HI.jar
    Feb 18, 2022 12:47:33 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 6 files newly uploaded in 1 seconds
    Feb 18, 2022 12:47:33 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 18, 2022 12:47:33 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash 4b3683e014a984eed06026d3232a01a13a063ad4a903d10f0f2183a661992bff> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-SzaD4BSphO7QYCbTIyoBoToGOtSpA9EPDyGDpmGZK_8.pb
    Feb 18, 2022 12:47:36 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 18, 2022 12:47:36 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 18, 2022 12:47:36 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 18, 2022 12:47:36 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 18, 2022 12:47:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-18_04_47_36-18077157188863789953?project=apache-beam-testing
    Feb 18, 2022 12:47:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-18_04_47_36-18077157188863789953
    Feb 18, 2022 12:47:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-18_04_47_36-18077157188863789953
    Feb 18, 2022 12:47:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-18T12:47:38.781Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 18, 2022 12:47:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T12:47:50.893Z: Worker configuration: e2-standard-2 in us-central1-b.
    Feb 18, 2022 12:47:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T12:47:51.460Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 18, 2022 12:47:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T12:47:51.519Z: Expanding GroupByKey operations into optimizable parts.
    Feb 18, 2022 12:47:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T12:47:51.555Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 18, 2022 12:47:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T12:47:51.660Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 18, 2022 12:47:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T12:47:51.712Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 18, 2022 12:47:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T12:47:51.758Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 18, 2022 12:47:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T12:47:52.338Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 18, 2022 12:47:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T12:47:52.430Z: Starting 5 workers in us-central1-b...
    Feb 18, 2022 12:48:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T12:48:11.560Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 18, 2022 12:48:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T12:48:33.379Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 18, 2022 12:49:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T12:48:59.414Z: Workers have started successfully.
    Feb 18, 2022 12:49:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T12:48:59.444Z: Workers have started successfully.
    Feb 18, 2022 12:49:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T12:49:33.031Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 18, 2022 12:49:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T12:49:33.401Z: Cleaning up.
    Feb 18, 2022 12:49:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T12:49:33.561Z: Stopping worker pool...
    Feb 18, 2022 12:52:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T12:52:06.499Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 18, 2022 12:52:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T12:52:06.623Z: Worker pool stopped.
    Feb 18, 2022 12:52:21 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-18_04_47_36-18077157188863789953 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 47ce70bb-e6d7-4a2f-b8c5-54805ed58c4c and timestamp: 2022-02-18T12:52:21.793000000Z:
                     Metric:                    Value:
                   read_time                    10.176
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 18, 2022 12:52:21 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
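
A note on that warning: InfluxDBPublisher skips publishing when no measurement/database is configured for the run. The metrics would be published if the pipeline options carried the Influx settings; the flag names below are an assumption based on how Beam's test utilities are commonly configured, not something shown in this log:

    # hypothetical additions to the -DbeamTestPipelineOptions array (flag names assumed):
    "--influxHost=http://localhost:8086",
    "--influxDatabase=beam_test_metrics",
    "--influxMeasurement=sql_bqio_read_java_batch"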

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 5 mins 6.866 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 14s
165 actionable tasks: 107 executed, 56 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/4ywbxptm6u5xw

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3046

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3046/display/redirect?page=changes>

Changes:

[noreply] [release-2.36.0] Fix pickler argument for 2.36 blog (#16774)


------------------------------------------
[...truncated 347.00 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 51ae16472e3a7081ab3f3cb97ce1744a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 18, 2022 6:45:26 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 18, 2022 6:45:27 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 18, 2022 6:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 18, 2022 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 18, 2022 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 18, 2022 6:45:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 18, 2022 6:45:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 18, 2022 6:45:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 18, 2022 6:45:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 18, 2022 6:45:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
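
The failure message above spells out the remedy: a PCollection of Beam Rows needs a schema (PCollection.setRowSchema) or an explicit coder (setCoder) before downstream transforms can be applied. A minimal, self-contained sketch of the schema-based fix over an in-memory table, using a hypothetical two-field schema rather than the real HACKER_NEWS table, and running the same shape of query the IT issues:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        // Hypothetical schema; the IT derives its schema from the BigQuery table.
        Schema schema = Schema.builder()
            .addStringField("type")
            .addInt64Field("score")
            .build();

        Pipeline p = Pipeline.create();
        // withRowSchema attaches the schema at creation time; for a PCollection<Row>
        // produced elsewhere, rows.setRowSchema(schema) before further applies is
        // the equivalent remedy named in the error text.
        PCollection<Row> rows = p.apply(
            Create.of(
                    Row.withSchema(schema).addValues("story", 3L).build(),
                    Row.withSchema(schema).addValues("comment", 1L).build())
                .withRowSchema(schema));

        // With the schema in place, coder inference succeeds and a query of the
        // same shape as the IT's can be applied via Beam SQL.
        rows.apply(
            SqlTransform.query(
                "SELECT type, score FROM PCOLLECTION "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }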

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 18, 2022 6:45:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 18, 2022 6:45:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 18, 2022 6:45:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 18, 2022 6:45:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 18, 2022 6:45:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 18, 2022 6:45:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 18, 2022 6:45:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@635097653]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 18, 2022 6:45:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 18, 2022 6:45:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 18, 2022 6:45:33 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 18, 2022 6:45:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 18, 2022 6:45:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 18, 2022 6:45:33 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 18, 2022 6:45:33 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 18, 2022 6:45:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
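
For reference, the push-down above is driven by the table's read method: only a DIRECT_READ table yields a BeamPushDownIOSourceRel with a BigQueryFilter, which is why this test passes while the DEFAULT-method variant does not exercise push-down. A sketch of the Beam SQL DDL that declares such a table, with placeholder table, location, and column definitions rather than this job's real ones:

    CREATE EXTERNAL TABLE HACKER_NEWS (
      `by` VARCHAR,
      `type` VARCHAR,
      `title` VARCHAR,
      `score` BIGINT
    )
    TYPE bigquery
    LOCATION 'apache-beam-testing:beam.HACKER_NEWS'
    TBLPROPERTIES '{"method": "DIRECT_READ"}'
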
    Feb 18, 2022 6:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 18, 2022 6:45:37 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 18, 2022 6:45:37 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 18, 2022 6:45:37 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4159604686206536324.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-9rCmjL0z3ybPMNiH23SGlYn_kalMueV3JJYIaf26oNU.jar
    Feb 18, 2022 6:45:37 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 0 seconds
    Feb 18, 2022 6:45:37 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 18, 2022 6:45:38 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash 4a048ca9a265f8403d17231cdffac6754b7afaea9875c9617e97b0e72a5668dc> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-SgSMqaJl-EA9FyMc3_rGdUt6-uqYdclhfpew5ypWaNw.pb
    Feb 18, 2022 6:45:41 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 18, 2022 6:45:41 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 18, 2022 6:45:41 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 18, 2022 6:45:41 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 18, 2022 6:45:42 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-17_22_45_41-12710517370446814408?project=apache-beam-testing
    Feb 18, 2022 6:45:42 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-17_22_45_41-12710517370446814408
    Feb 18, 2022 6:45:42 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-17_22_45_41-12710517370446814408
    Feb 18, 2022 6:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-18T06:45:43.595Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 18, 2022 6:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T06:45:54.466Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 18, 2022 6:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T06:45:56.591Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 18, 2022 6:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T06:45:56.629Z: Expanding GroupByKey operations into optimizable parts.
    Feb 18, 2022 6:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T06:45:56.656Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 18, 2022 6:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T06:45:56.730Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 18, 2022 6:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T06:45:56.763Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 18, 2022 6:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T06:45:56.797Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 18, 2022 6:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T06:45:57.194Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 18, 2022 6:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T06:45:57.266Z: Starting 5 workers in us-central1-a...
    Feb 18, 2022 6:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T06:45:57.732Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 18, 2022 6:46:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T06:46:41.936Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 18, 2022 6:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T06:47:07.472Z: Workers have started successfully.
    Feb 18, 2022 6:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T06:47:07.500Z: Workers have started successfully.
    Feb 18, 2022 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T06:47:37.993Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 18, 2022 6:47:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T06:47:38.170Z: Cleaning up.
    Feb 18, 2022 6:47:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T06:47:38.255Z: Stopping worker pool...
    Feb 18, 2022 6:50:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T06:49:59.890Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 18, 2022 6:50:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T06:49:59.943Z: Worker pool stopped.
    Feb 18, 2022 6:50:07 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-17_22_45_41-12710517370446814408 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): b0283f8b-569e-4e5e-b7a5-a520670f64db and timestamp: 2022-02-18T06:50:07.793000000Z:
                     Metric:                    Value:
                   read_time                     7.561
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 18, 2022 6:50:08 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 45.288 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 12s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/2l62udibtzkk4

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3045

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3045/display/redirect?page=changes>

Changes:

[Pablo Estrada] Simplify README for new users

[mmack] [BEAM-13563] Introducing common AWS ClientBuilderFactory to unify

[laraschmidt] Fix final allowskew error to properly handle a large allowedSkew

[noreply] [BEAM-13946] Add get_dummies(), a non-deferred column operation on


------------------------------------------
[...truncated 359.60 KB...]
    Feb 18, 2022 12:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 18, 2022 12:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 18, 2022 12:45:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 18, 2022 12:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 18, 2022 12:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 18, 2022 12:45:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 18, 2022 12:45:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 18, 2022 12:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 18, 2022 12:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 18, 2022 12:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 18, 2022 12:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 18, 2022 12:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4170251476782595843.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-eBiO0EuZHKn1jFtmVPMQ35pvomJspxWwqATHzwKryLg.jar
    Feb 18, 2022 12:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 0 seconds
    Feb 18, 2022 12:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 18, 2022 12:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143938 bytes, hash d0a0df9cc70689a19f777ce2a31aeb2ba2bb1d465054b29be0f05d1b1c9ba712> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-0KDfnMcGiaGfd3zioxrrK6K7HUZQVLKb4PBdGxybpxI.pb
    Feb 18, 2022 12:45:27 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 18, 2022 12:45:27 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 18, 2022 12:45:27 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 18, 2022 12:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 18, 2022 12:45:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-17_16_45_27-7594338871772142847?project=apache-beam-testing
    Feb 18, 2022 12:45:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-17_16_45_27-7594338871772142847
    Feb 18, 2022 12:45:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-17_16_45_27-7594338871772142847
    Feb 18, 2022 12:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-18T00:45:29.765Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 18, 2022 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T00:45:40.659Z: Worker configuration: e2-standard-2 in us-central1-c.
    Feb 18, 2022 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T00:45:41.535Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 18, 2022 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T00:45:41.587Z: Expanding GroupByKey operations into optimizable parts.
    Feb 18, 2022 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T00:45:41.615Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 18, 2022 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T00:45:41.700Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 18, 2022 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T00:45:41.734Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 18, 2022 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T00:45:41.767Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 18, 2022 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T00:45:42.199Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 18, 2022 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T00:45:42.289Z: Starting 5 workers in us-central1-c...
    Feb 18, 2022 12:46:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T00:46:03.942Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 18, 2022 12:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T00:46:22.712Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 18, 2022 12:46:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T00:46:47.321Z: Workers have started successfully.
    Feb 18, 2022 12:46:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T00:46:47.369Z: Workers have started successfully.
    Feb 18, 2022 12:47:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-18T00:47:17.218Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDBXalQ2UlZGOGJxSxoCamQaAmly/streams/CAQaAmpkGgJpciCFmJvyAygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDBXalQ2UlZGOGJxSxoCamQaAmly/streams/CAQaAmpkGgJpciCFmJvyAygC': offset 87600 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDBXalQ2UlZGOGJxSxoCamQaAmly/streams/CAQaAmpkGgJpciCFmJvyAygC': offset 87600 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 18, 2022 12:47:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-18T00:47:19.405Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDBXalQ2UlZGOGJxSxoCamQaAmly/streams/CAcaAmpkGgJpciDfq5D1BygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDBXalQ2UlZGOGJxSxoCamQaAmly/streams/CAcaAmpkGgJpciDfq5D1BygC': offset 126271 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDBXalQ2UlZGOGJxSxoCamQaAmly/streams/CAcaAmpkGgJpciDfq5D1BygC': offset 126271 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 18, 2022 12:47:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T00:47:21.258Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 18, 2022 12:47:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T00:47:21.427Z: Cleaning up.
    Feb 18, 2022 12:47:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T00:47:21.525Z: Stopping worker pool...
    Feb 18, 2022 12:49:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T00:49:44.026Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 18, 2022 12:49:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-18T00:49:44.081Z: Worker pool stopped.
    Feb 18, 2022 12:49:49 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-17_16_45_27-7594338871772142847 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 1c528837-91c0-46ce-8d54-a586c897f594 and timestamp: 2022-02-18T00:49:49.595000000Z:
                     Metric:                    Value:
                   read_time                      9.22
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 18, 2022 12:49:49 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 41.699 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 25s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/m6cm7iyu2qxuq

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3044

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3044/display/redirect?page=changes>

Changes:

[Kenneth Knowles] Disable AfterSynchronizedProcessingTime test on Dataflow

[noreply] Case studies page improvements (#16702)


------------------------------------------
[...truncated 351.91 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 400d876274ae04ca58a54cbe9c99d58a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 17, 2022 7:48:59 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 17, 2022 7:49:00 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 17, 2022 7:49:01 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 17, 2022 7:49:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 17, 2022 7:49:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 17, 2022 7:49:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 17, 2022 7:49:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 17, 2022 7:49:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 17, 2022 7:49:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 17, 2022 7:49:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
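
The failure above (and readUsingDefaultMethod below, which fails the same way) has a single root cause: the Row PCollection produced for the BeamIOSourceRel carries no schema, so coder inference fails. A minimal sketch of the fix the error message itself suggests, assuming a hypothetical four-field schema matching the columns projected by the query above (illustrative only, not the IT's actual code):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFixSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.create());

        // Hypothetical schema for the four columns projected by the query above.
        Schema schema =
            Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();

        Row row =
            Row.withSchema(schema).addValues("someone", "story", "A title", 3L).build();

        // A PCollection<Row> must carry an explicit schema; without one, coder
        // inference fails with exactly the IllegalStateException logged above.
        PCollection<Row> rows =
            p.apply(Create.of(Arrays.asList(row)).withRowSchema(schema));

        // For a Row PCollection produced by some other transform, the equivalent
        // fix is the call the error message recommends:
        //   rows.setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }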

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 17, 2022 7:49:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 17, 2022 7:49:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 17, 2022 7:49:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 17, 2022 7:49:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 17, 2022 7:49:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 17, 2022 7:49:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 17, 2022 7:49:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@635097653]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 17, 2022 7:49:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 17, 2022 7:49:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 17, 2022 7:49:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 17, 2022 7:49:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 17, 2022 7:49:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 17, 2022 7:49:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 17, 2022 7:49:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 17, 2022 7:49:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 17, 2022 7:49:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 17, 2022 7:49:13 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 17, 2022 7:49:13 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 17, 2022 7:49:13 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3205813126226258764.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-wzCUr6us2ttYtWazaj1SsZek_nipl9xUo_y5eqI_cQM.jar
    Feb 17, 2022 7:49:13 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 0 seconds
    Feb 17, 2022 7:49:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 17, 2022 7:49:14 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash eeafa66d476c584a9136d1ebebf2d8245e5c7a1bc9d99ac844bfb96f7ffb3652> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-7q-mbUdsWEqRNtHr6_LYJF5cehvJ2ZrIRL-5b3_7NlI.pb
    Feb 17, 2022 7:49:17 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 17, 2022 7:49:17 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 17, 2022 7:49:17 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 17, 2022 7:49:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 17, 2022 7:49:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-17_11_49_17-14083408455997185391?project=apache-beam-testing
    Feb 17, 2022 7:49:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-17_11_49_17-14083408455997185391
    Feb 17, 2022 7:49:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-17_11_49_17-14083408455997185391
    Feb 17, 2022 7:49:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-17T19:49:19.603Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 17, 2022 7:49:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T19:49:29.342Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 17, 2022 7:49:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T19:49:30.106Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 17, 2022 7:49:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T19:49:30.178Z: Expanding GroupByKey operations into optimizable parts.
    Feb 17, 2022 7:49:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T19:49:30.220Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 17, 2022 7:49:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T19:49:30.296Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 17, 2022 7:49:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T19:49:30.335Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 17, 2022 7:49:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T19:49:30.388Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 17, 2022 7:49:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T19:49:30.758Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 17, 2022 7:49:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T19:49:30.841Z: Starting 5 workers in us-central1-a...
    Feb 17, 2022 7:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T19:49:55.181Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 17, 2022 7:50:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T19:50:19.482Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 17, 2022 7:50:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T19:50:43.273Z: Workers have started successfully.
    Feb 17, 2022 7:50:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T19:50:43.301Z: Workers have started successfully.
    Feb 17, 2022 7:51:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T19:51:13.652Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 17, 2022 7:51:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T19:51:13.842Z: Cleaning up.
    Feb 17, 2022 7:51:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T19:51:13.924Z: Stopping worker pool...
    Feb 17, 2022 7:53:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T19:53:42.409Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 17, 2022 7:53:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T19:53:42.449Z: Worker pool stopped.
    Feb 17, 2022 7:53:47 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-17_11_49_17-14083408455997185391 finished with status DONE.
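
In contrast to the two coder failures, this push-down run succeeded: the planner replaced BeamCalcRel + BeamIOSourceRel with a single BeamPushDownIOSourceRel, so the projection (usedFields) and the filter were handed to BigQuery rather than applied inside the pipeline. A sketch of what that amounts to at the BigQueryIO level, assuming a placeholder table reference (this is not the IT's actual code):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // DIRECT_READ uses the BigQuery Storage Read API, which is what makes
        // projection and filter push-down possible at all.
        PCollection<TableRow> rows =
            p.apply(
                BigQueryIO.readTableRows()
                    .from("my-project:beam.HACKER_NEWS") // placeholder table reference
                    .withMethod(Method.DIRECT_READ)
                    // usedFields from the BEAMPlan above:
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    // the filter the planner pushed down, as logged above:
                    .withRowRestriction(
                        "(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }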

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f50bb3e1-ef85-44ed-baa6-4541013dc2a3 and timestamp: 2022-02-17T19:53:47.639000000Z:
                     Metric:                    Value:
                   read_time                     7.628
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 17, 2022 7:53:47 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
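
This warning means no InfluxDB sink was configured for the run, so the read_time and fields_read values above were only printed, not persisted. Assuming the standard Beam test-utils option names (an assumption; this log does not show them), publishing would require adding options along these lines to the -DbeamTestPipelineOptions list shown earlier:

    "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch"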

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 52.267 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 1s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/xnvejv54m7are

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3043

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3043/display/redirect>

Changes:


------------------------------------------
[...truncated 346.06 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 17, 2022 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 17, 2022 12:45:06 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 17, 2022 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 17, 2022 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 17, 2022 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 17, 2022 12:45:10 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 17, 2022 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 17, 2022 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 17, 2022 12:45:11 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 17, 2022 12:45:11 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 17, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 17, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 17, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 17, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 17, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 17, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 17, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@848490487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 17, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 17, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 17, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 17, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 17, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 17, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 17, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 17, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 17, 2022 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 17, 2022 12:45:19 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 17, 2022 12:45:19 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 17, 2022 12:45:19 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8132456804210597011.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ZN7dFwDsiprdRJZIZjjBjCVX8yzokq1N7vxKQZTsM7Q.jar
    Feb 17, 2022 12:45:20 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 1 seconds
    Feb 17, 2022 12:45:20 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 17, 2022 12:45:20 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash 7c825d0f32faaba9b57df46362f0e4f3e32a38eb50d5f77e9ad53cfc52d23b47> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-fIJdDzL6q6m1ffRjYvDk8-MqOOtQ1fd-mtU8_FLSO0c.pb
    Feb 17, 2022 12:45:23 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 17, 2022 12:45:24 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 17, 2022 12:45:24 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 17, 2022 12:45:24 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 17, 2022 12:45:24 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-17_04_45_24-12996873121497289090?project=apache-beam-testing
    Feb 17, 2022 12:45:24 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-17_04_45_24-12996873121497289090
    Feb 17, 2022 12:45:24 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-17_04_45_24-12996873121497289090
    Feb 17, 2022 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-17T12:45:26.558Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 17, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T12:45:34.378Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 17, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T12:45:35.003Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 17, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T12:45:35.055Z: Expanding GroupByKey operations into optimizable parts.
    Feb 17, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T12:45:35.082Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 17, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T12:45:35.265Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 17, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T12:45:35.322Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 17, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T12:45:35.361Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 17, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T12:45:35.662Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 17, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T12:45:35.729Z: Starting 5 workers in us-central1-f...
    Feb 17, 2022 12:45:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T12:45:58.230Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 17, 2022 12:46:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T12:46:17.703Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 17, 2022 12:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T12:46:41.495Z: Workers have started successfully.
    Feb 17, 2022 12:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T12:46:41.525Z: Workers have started successfully.
    Feb 17, 2022 12:47:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T12:47:16.498Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 17, 2022 12:47:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T12:47:16.644Z: Cleaning up.
    Feb 17, 2022 12:47:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T12:47:16.722Z: Stopping worker pool...
    Feb 17, 2022 12:49:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T12:49:37.829Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 17, 2022 12:49:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T12:49:38.259Z: Worker pool stopped.
    Feb 17, 2022 12:49:45 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-17_04_45_24-12996873121497289090 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 60d89b21-c0be-4a44-9bbb-d1c4cd9dfb7f and timestamp: 2022-02-17T12:49:45.198000000Z:
                     Metric:                    Value:
                   read_time                      8.13
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 17, 2022 12:49:45 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 44.552 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 10s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/3kjud5b2qvrns

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3042

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3042/display/redirect?page=changes>

Changes:

[relax] Fix TableRow conversion for the case of fields named "f"


------------------------------------------
[...truncated 355.17 KB...]
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 17, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 17, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 17, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 17, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 17, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 17, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 17, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@848490487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 17, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 17, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 17, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 17, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 17, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 17, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 17, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 17, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 17, 2022 6:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 17, 2022 6:45:26 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 17, 2022 6:45:26 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 17, 2022 6:45:26 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8516642622176879390.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-j7aoNRbFtCK15jz5oN11sQ1adE9W_WbOIi05bHvDbf8.jar
    Feb 17, 2022 6:45:26 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 0 seconds
    Feb 17, 2022 6:45:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 17, 2022 6:45:26 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash 1bc3723483096ce1170571b8ba813b51fc1f9ffe80d3ef514e3d9713b68fdc5e> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-G8NyNIMJbOEXBXG4uoE7Ufwfn_6A0-9RTj2XE7aP3F4.pb
    Feb 17, 2022 6:45:29 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 17, 2022 6:45:30 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 17, 2022 6:45:30 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 17, 2022 6:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 17, 2022 6:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-16_22_45_30-12520041553066869851?project=apache-beam-testing
    Feb 17, 2022 6:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-16_22_45_30-12520041553066869851
    Feb 17, 2022 6:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-16_22_45_30-12520041553066869851
    Feb 17, 2022 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-17T06:45:32.594Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 17, 2022 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T06:45:43.755Z: Worker configuration: e2-standard-2 in us-central1-b.
    Feb 17, 2022 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T06:45:44.343Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 17, 2022 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T06:45:44.380Z: Expanding GroupByKey operations into optimizable parts.
    Feb 17, 2022 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T06:45:44.405Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 17, 2022 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T06:45:44.478Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 17, 2022 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T06:45:44.513Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 17, 2022 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T06:45:44.547Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 17, 2022 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T06:45:44.858Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 17, 2022 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T06:45:44.933Z: Starting 5 workers in us-central1-b...
    Feb 17, 2022 6:46:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T06:45:59.363Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 17, 2022 6:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T06:46:30.721Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 17, 2022 6:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T06:47:00.124Z: Workers have started successfully.
    Feb 17, 2022 6:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T06:47:00.149Z: Workers have started successfully.
    Feb 17, 2022 6:47:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-17T06:47:30.198Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDC1xYkhVMEJGOTNoNxoCamQaAmly/streams/CAcaAmpkGgJpciCEzKmgBSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDC1xYkhVMEJGOTNoNxoCamQaAmly/streams/CAcaAmpkGgJpciCEzKmgBSgC': offset 108503 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDC1xYkhVMEJGOTNoNxoCamQaAmly/streams/CAcaAmpkGgJpciCEzKmgBSgC': offset 108503 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
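
    The FAILED_PRECONDITION above ("offset ... has not been allocated yet") is a transient Storage Read API condition: the reader requested a row offset the backend had not made durable yet. Dataflow retries the failed work item, which is why the job still reaches DONE below despite the SEVERE entry. A minimal resume sketch, assuming the public google-cloud-bigquerystorage v1 client rather than Beam's internal BigQueryStorageStreamReader (retry count and backoff are illustrative):

        import com.google.api.gax.rpc.FailedPreconditionException;
        import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
        import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
        import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

        static void readWithResume(BigQueryReadClient client, String stream)
            throws InterruptedException {
          long offset = 0;   // rows already consumed; the resume point on retry
          int attempts = 0;
          while (true) {
            ReadRowsRequest request = ReadRowsRequest.newBuilder()
                .setReadStream(stream)
                .setOffset(offset)
                .build();
            try {
              for (ReadRowsResponse response : client.readRowsCallable().call(request)) {
                offset += response.getRowCount();  // advance the resume offset per batch
                // ... decode response.getAvroRows() / getArrowRecordBatch() here ...
              }
              return;  // stream fully drained
            } catch (FailedPreconditionException e) {
              if (++attempts > 5) throw e;    // give up after a few transient failures
              Thread.sleep(1000L * attempts); // simple linear backoff
            }
          }
        }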

    Feb 17, 2022 6:47:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T06:47:32.249Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 17, 2022 6:47:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T06:47:32.381Z: Cleaning up.
    Feb 17, 2022 6:47:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T06:47:32.452Z: Stopping worker pool...
    Feb 17, 2022 6:50:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T06:50:07.469Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 17, 2022 6:50:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T06:50:07.521Z: Worker pool stopped.
    Feb 17, 2022 6:50:18 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-16_22_45_30-12520041553066869851 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 61a039ba-0310-4ed8-a6e7-58a100e1b2c9 and timestamp: 2022-02-17T06:50:18.631000000Z:
                     Metric:                    Value:
                   read_time                     9.488
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 17, 2022 6:50:18 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
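    Note: InfluxDBPublisher prints the metrics but skips the write when no database/measurement is configured, so the run above recorded nothing. A hedged sketch of wiring them up explicitly, assuming the InfluxDBSettings builder that ships alongside the publisher in org.apache.beam.sdk.testutils.publishing (builder method names are from memory and may differ across Beam versions; host and database values are hypothetical):

        import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

        InfluxDBSettings settings = InfluxDBSettings.builder()
            .withHost("http://localhost:8086")            // hypothetical InfluxDB endpoint
            .withDatabase("beam_test_metrics")            // hypothetical database name
            .withMeasurement("sql_bqio_read_java_batch")  // measurement for this suite
            .get();                                       // assumed terminal builder method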

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 5 mins 8.038 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org
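
To reproduce just the failing suite locally, the standard Gradle test filter applies (module path taken from the failing task above; the integrationTest task additionally reads its pipeline options from the beamTestPipelineOptions system property):

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest \
        --tests "org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT" \
        --stacktrace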

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 52s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/kln63xj24vsq2

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3041

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3041/display/redirect?page=changes>

Changes:

[Kyle Weaver] [BEAM-13106] Support Flink 1.14.

[Kyle Weaver] [BEAM-13106] Reuse executor instead of shutting it down mid-test.

[Kyle Weaver] [BEAM-13106] Prevent infinite wait in Flink savepoint test.

[Kyle Weaver] [BEAM-13106] A couple additional fixes to FlinkSavepointTest.

[noreply] Bump dataflow.fnapi_container_version (#16874)


------------------------------------------
[...truncated 368.68 KB...]
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-16_17_00_30-14944507678593221329
    Feb 17, 2022 1:00:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-17T01:00:34.270Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 17, 2022 1:00:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T01:00:41.031Z: Worker configuration: e2-standard-2 in us-central1-c.
    Feb 17, 2022 1:00:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T01:00:41.637Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 17, 2022 1:00:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T01:00:41.664Z: Expanding GroupByKey operations into optimizable parts.
    Feb 17, 2022 1:00:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T01:00:41.679Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 17, 2022 1:00:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T01:00:41.748Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 17, 2022 1:00:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T01:00:41.763Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 17, 2022 1:00:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T01:00:41.793Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 17, 2022 1:00:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T01:00:42.093Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 17, 2022 1:00:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T01:00:42.184Z: Starting 5 workers in us-central1-c...
    Feb 17, 2022 1:00:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T01:00:49.273Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 17, 2022 1:01:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T01:01:22.172Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 17, 2022 1:01:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T01:01:46.332Z: Workers have started successfully.
    Feb 17, 2022 1:01:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T01:01:46.363Z: Workers have started successfully.
    Feb 17, 2022 1:02:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-17T01:02:14.002Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDlyMDZjbHN6dGdrZhoCamQaAmly/streams/CAkaAmpkGgJpciC-rcnnASgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDlyMDZjbHN6dGdrZhoCamQaAmly/streams/CAkaAmpkGgJpciC-rcnnASgC': offset 116169 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDlyMDZjbHN6dGdrZhoCamQaAmly/streams/CAkaAmpkGgJpciC-rcnnASgC': offset 116169 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 17, 2022 1:02:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-17T01:02:14.999Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDlyMDZjbHN6dGdrZhoCamQaAmly/streams/CAcaAmpkGgJpciC_y4V7KAI"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDlyMDZjbHN6dGdrZhoCamQaAmly/streams/CAcaAmpkGgJpciC_y4V7KAI': offset 121672 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDlyMDZjbHN6dGdrZhoCamQaAmly/streams/CAcaAmpkGgJpciC_y4V7KAI': offset 121672 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 17, 2022 1:02:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-17T01:02:16.051Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDlyMDZjbHN6dGdrZhoCamQaAmly/streams/CAMaAmpkGgJpciCmsZHABigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDlyMDZjbHN6dGdrZhoCamQaAmly/streams/CAMaAmpkGgJpciCmsZHABigC': offset 74851 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDlyMDZjbHN6dGdrZhoCamQaAmly/streams/CAMaAmpkGgJpciCmsZHABigC': offset 74851 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 17, 2022 1:02:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T01:02:18.430Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 17, 2022 1:02:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T01:02:18.584Z: Cleaning up.
    Feb 17, 2022 1:02:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T01:02:18.651Z: Stopping worker pool...
    Feb 17, 2022 1:04:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T01:04:45.437Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 17, 2022 1:04:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-17T01:04:45.496Z: Worker pool stopped.
    Feb 17, 2022 1:04:53 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-16_17_00_30-14944507678593221329 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 7e9edb02-c59a-44a0-a716-953471a79bba and timestamp: 2022-02-17T01:04:53.221000000Z:
                     Metric:                    Value:
                   read_time                    10.209
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 17, 2022 1:04:53 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 48.453 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 2s
165 actionable tasks: 106 executed, 57 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/qq3es7koci6ks

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3040

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3040/display/redirect?page=changes>

Changes:

[Jeff Tapper] Update Java LTS roadmap info on website for Java 17

[Ismaël Mejía] [BEAM-13202] Fix typos on tests names for VarianceFnTest

[Ismaël Mejía] [BEAM-13202] Add Coder to CountIfFn.Accum

[Ismaël Mejía] [BEAM-13202] Reuse Count transform code since CountIf is a specific case

[Kenneth Knowles] Add test category UsesProcessingTimeTimers

[Kenneth Knowles] Label tests that need UsesProcessingTimeTimers

[Kenneth Knowles] Exclude UsesProcessingTimeTimers from SamzaRunner tests

[mmack] [adhoc] Migrate KinesisIOIT to use ITEnvironment for Localstack based IT

[noreply] [BEAM-13955] Fix pylint breakage from #16836 (#16867)


------------------------------------------
[...truncated 369.86 KB...]
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-16_11_04_59-15758495856737030378
    Feb 16, 2022 7:05:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-16T19:05:01.290Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 16, 2022 7:05:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T19:05:07.761Z: Worker configuration: e2-standard-2 in us-central1-c.
    Feb 16, 2022 7:05:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T19:05:08.455Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 16, 2022 7:05:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T19:05:08.495Z: Expanding GroupByKey operations into optimizable parts.
    Feb 16, 2022 7:05:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T19:05:08.522Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 16, 2022 7:05:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T19:05:08.599Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 16, 2022 7:05:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T19:05:08.633Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 16, 2022 7:05:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T19:05:08.666Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 16, 2022 7:05:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T19:05:09.061Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 16, 2022 7:05:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T19:05:09.142Z: Starting 5 workers in us-central1-c...
    Feb 16, 2022 7:05:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T19:05:37.688Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 16, 2022 7:05:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T19:05:54.569Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 16, 2022 7:06:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T19:06:17.433Z: Workers have started successfully.
    Feb 16, 2022 7:06:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T19:06:17.465Z: Workers have started successfully.
    Feb 16, 2022 7:06:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-16T19:06:45.199Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGQyY2pNeXRqVTdLThoCamQaAmly/streams/CAEaAmpkGgJpciDeqbeYAigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGQyY2pNeXRqVTdLThoCamQaAmly/streams/CAEaAmpkGgJpciDeqbeYAigC': offset 111625 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGQyY2pNeXRqVTdLThoCamQaAmly/streams/CAEaAmpkGgJpciDeqbeYAigC': offset 111625 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 16, 2022 7:06:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-16T19:06:45.522Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGQyY2pNeXRqVTdLThoCamQaAmly/streams/CAkaAmpkGgJpciDahuJ4KAI"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGQyY2pNeXRqVTdLThoCamQaAmly/streams/CAkaAmpkGgJpciDahuJ4KAI': offset 99973 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGQyY2pNeXRqVTdLThoCamQaAmly/streams/CAkaAmpkGgJpciDahuJ4KAI': offset 99973 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 16, 2022 7:06:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-16T19:06:45.645Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGQyY2pNeXRqVTdLThoCamQaAmly/streams/CAQaAmpkGgJpciDFoLOpAygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGQyY2pNeXRqVTdLThoCamQaAmly/streams/CAQaAmpkGgJpciDFoLOpAygC': offset 68595 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGQyY2pNeXRqVTdLThoCamQaAmly/streams/CAQaAmpkGgJpciDFoLOpAygC': offset 68595 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

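The FAILED_PRECONDITION above ("offset 68595 has not been allocated yet") is the BigQuery Storage Read API reporting that a reader asked for a stream position the server had not produced yet. Here the error was transient and the job below still finishes with status DONE. A minimal sketch of one way to shield an idempotent read from this class of error, assuming a simple linear backoff is acceptable; the helper name and parameters are illustrative, not Beam or gax API:

    import com.google.api.gax.rpc.FailedPreconditionException;
    import java.util.function.Supplier;

    public final class TransientReadRetry {
      // Hypothetical helper: re-issues an idempotent read that can fail
      // transiently with FAILED_PRECONDITION while the server-side stream
      // catches up. Assumes maxAttempts >= 1.
      static <T> T withRetries(Supplier<T> read, int maxAttempts, long backoffMillis)
          throws InterruptedException {
        FailedPreconditionException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
          try {
            return read.get();
          } catch (FailedPreconditionException e) {
            last = e;
            Thread.sleep(attempt * backoffMillis); // linear backoff between tries
          }
        }
        throw last;
      }
    }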
    Feb 16, 2022 7:06:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T19:06:50.061Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 16, 2022 7:06:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T19:06:50.203Z: Cleaning up.
    Feb 16, 2022 7:06:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T19:06:50.283Z: Stopping worker pool...
    Feb 16, 2022 7:09:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T19:09:17.205Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 16, 2022 7:09:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T19:09:17.271Z: Worker pool stopped.
    Feb 16, 2022 7:09:22 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-16_11_04_59-15758495856737030378 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 05213e74-abcf-4812-bb76-94902f9623af and timestamp: 2022-02-16T19:09:22.593000000Z:
                     Metric:                    Value:
                   read_time                    11.714
                 fields_read                 4375276.0

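The read_time and fields_read values above are accumulated by the ParDo(RowMonitor) and ParDo(TimeMonitor) steps visible in the pipeline. A rough sketch of how a counter such as fields_read can be gathered in a monitoring DoFn; the class below is an illustrative stand-in, not the actual Beam test utility:

    import org.apache.beam.sdk.metrics.Counter;
    import org.apache.beam.sdk.metrics.Metrics;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.values.Row;

    public final class FieldCountMonitor extends DoFn<Row, Row> {
      // Counts every field that flows through and passes the row along
      // unchanged; the runner aggregates the counter across workers.
      private final Counter fieldsRead =
          Metrics.counter(FieldCountMonitor.class, "fields_read");

      @ProcessElement
      public void processElement(@Element Row row, OutputReceiver<Row> out) {
        fieldsRead.inc(row.getFieldCount());
        out.output(row);
      }
    }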
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 16, 2022 7:09:22 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 5 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 43.362 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 25s
165 actionable tasks: 107 executed, 56 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/ppacvjwr6j2wi

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3039

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3039/display/redirect?page=changes>

Changes:

[stranniknm] [BEAM-13785] playground - enable scio sdk

[noreply] Merge pull request #16753 from [BEAM-13837] [Playground] show graph on


------------------------------------------
[...truncated 348.44 KB...]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 16, 2022 12:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 16, 2022 12:44:51 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 16, 2022 12:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 16, 2022 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 16, 2022 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 16, 2022 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 16, 2022 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 16, 2022 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 16, 2022 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 16, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

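The IllegalStateException above names its own fix: a PCollection of Beam Rows has no default Coder, so the pipeline must attach a schema with PCollection.setRowSchema, from which Beam derives a RowCoder. A minimal sketch against the four columns the query projects; the field names and nullability here are assumptions for illustration:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public final class RowSchemaFix {
      // Attaching a schema lets Beam infer RowCoder.of(schema) for the output.
      static PCollection<Row> withQuerySchema(PCollection<Row> rows) {
        Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();
        return rows.setRowSchema(schema);
      }
    }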
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 16, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 16, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 16, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 16, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 16, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 16, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 16, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@848490487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

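readUsingDefaultMethod fails at the same pre-submission coder check, so the intended comparison between the two read paths never executes. For reference, the paths differ only in the method set on the BigQuery source: DEFAULT currently resolves to the export-based read, while DIRECT_READ streams rows over the Storage Read API. A minimal sketch, with an illustrative table reference:

    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    public final class ReadMethods {
      // Same table, read over the Storage Read API rather than a GCS export.
      static TypedRead<TableRow> directRead() {
        return BigQueryIO.readTableRows()
            .from("apache-beam-testing:beam.HACKER_NEWS") // illustrative table ref
            .withMethod(TypedRead.Method.DIRECT_READ);
      }
    }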
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 16, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 16, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 16, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 16, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 16, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 16, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 16, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 16, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
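The plan above shows the whole projection and predicate collapsing into a single BeamPushDownIOSourceRel, with the filter rewritten for BigQuery. For comparison, roughly the same query issued through SqlTransform; the input PCollection and method name below are illustrative, and without an IO-backed table the predicate would instead run in a BeamCalcRel, as in the plans of the two failing tests:

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TupleTag;

    public final class PushDownQuery {
      // Registers a schema-aware PCollection<Row> as HACKER_NEWS and runs the
      // test's SELECT/WHERE over it with Beam SQL.
      static PCollection<Row> apply(PCollection<Row> hackerNews) {
        return PCollectionTuple.of(new TupleTag<Row>("HACKER_NEWS"), hackerNews)
            .apply(
                SqlTransform.query(
                    "SELECT `by` AS author, `type`, title, score "
                        + "FROM HACKER_NEWS "
                        + "WHERE (`type` = 'story' OR `type` = 'job') AND score > 2"));
      }
    }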
    Feb 16, 2022 12:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 16, 2022 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 16, 2022 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 16, 2022 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1727610602548651616.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-MYw1mEg2A-VMgmamMu0hKpVA0NM5IJjS8d-het3oOkM.jar
    Feb 16, 2022 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.thrift/libthrift/0.14.1/85348a0c44c298bbec5ae747e67ae12e60b3aef6/libthrift-0.14.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/libthrift-0.14.1-WzUQ_nLm8HJeKc7269seq6zMxp15_E7VC2gWAKh2Z-w.jar
    Feb 16, 2022 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.beam/beam-vendor-calcite-1_28_0/0.2/b8ec320b972b575ab37767bf8d4cfadff1fe304a/beam-vendor-calcite-1_28_0-0.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/beam-vendor-calcite-1_28_0-0.2-pvjNvR5NntriHz0ja9OKytRXl6tOlifYWKUI0wMGtVo.jar
    Feb 16, 2022 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/8.5.46/5d686394334d143f48251827435ab086a161e75e/tomcat-embed-core-8.5.46.jar to gs://temp-storage-for-perf-tests/loadtests/staging/tomcat-embed-core-8.5.46-vl-FREjS7l1uADb-srT3ExYweaG2uaepdQjlWRetNcI.jar
    Feb 16, 2022 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.tomcat/tomcat-annotations-api/8.5.46/56c67699de192c603afd6f029e80e5ff8d98e7e9/tomcat-annotations-api-8.5.46.jar to gs://temp-storage-for-perf-tests/loadtests/staging/tomcat-annotations-api-8.5.46-amtG0OaVhkRRTAyjZYs7B-YSOmgqIO4203lSQnNfq8M.jar
    Feb 16, 2022 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 362 files cached, 5 files newly uploaded in 1 seconds
    Feb 16, 2022 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 16, 2022 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash 345a829caf89692d58c799df5c5ea7e16fd9ad7020b955c820084a9350a7f568> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-NFqCnK-JaS1Yx5nfXF6n4W_ZrXAguVXIIAhKk1Cn9Wg.pb
    Feb 16, 2022 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 16, 2022 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 16, 2022 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 16, 2022 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 16, 2022 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-16_04_45_09-14920341991937859630?project=apache-beam-testing
    Feb 16, 2022 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-16_04_45_09-14920341991937859630
    Feb 16, 2022 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-16_04_45_09-14920341991937859630
    Feb 16, 2022 12:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-16T12:45:11.063Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 16, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T12:45:19.958Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 16, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T12:45:20.541Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 16, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T12:45:20.568Z: Expanding GroupByKey operations into optimizable parts.
    Feb 16, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T12:45:20.595Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 16, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T12:45:20.675Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 16, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T12:45:20.702Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 16, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T12:45:20.733Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 16, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T12:45:21.140Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 16, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T12:45:21.203Z: Starting 5 workers in us-central1-a...
    Feb 16, 2022 12:45:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T12:45:35.624Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 16, 2022 12:46:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T12:46:05.519Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Feb 16, 2022 12:46:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T12:46:05.545Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Feb 16, 2022 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T12:46:31.621Z: Workers have started successfully.
    Feb 16, 2022 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T12:46:31.655Z: Workers have started successfully.
    Feb 16, 2022 12:46:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T12:46:46.789Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 16, 2022 12:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T12:47:03.956Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 16, 2022 12:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T12:47:04.227Z: Cleaning up.
    Feb 16, 2022 12:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T12:47:04.329Z: Stopping worker pool...
    Feb 16, 2022 12:49:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T12:49:29.291Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 16, 2022 12:49:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T12:49:29.333Z: Worker pool stopped.
    Feb 16, 2022 12:49:35 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-16_04_45_09-14920341991937859630 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 887c34f6-9f4f-4179-92b6-f0cba38944f1 and timestamp: 2022-02-16T12:49:35.538000000Z:
                     Metric:                    Value:
                   read_time                       7.3
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 16, 2022 12:49:35 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 48.509 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 14s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/zmz7pn4klbnro

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3038

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3038/display/redirect>

Changes:


------------------------------------------
[...truncated 359.67 KB...]
    Feb 16, 2022 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 16, 2022 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 16, 2022 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 16, 2022 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 16, 2022 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 16, 2022 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 16, 2022 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 16, 2022 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 16, 2022 6:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 16, 2022 6:45:20 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 16, 2022 6:45:20 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 16, 2022 6:45:20 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5153423187898400250.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Mq1kUuoDJ8ArVvifoZzSrr5P_Ag6HZB1RV06VivqRTM.jar
    Feb 16, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 0 seconds
    Feb 16, 2022 6:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 16, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143941 bytes, hash e76fc42c1412c14fc61e08844adc185814112208057f3d841b3cdb59fa6cccf9> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-52_ELBQSwU_GHgiEStwYWBQRIggFfz2EGzzbWfpszPk.pb
    Feb 16, 2022 6:45:24 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 16, 2022 6:45:24 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 16, 2022 6:45:24 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 16, 2022 6:45:24 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 16, 2022 6:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-15_22_45_24-3579044423134093876?project=apache-beam-testing
    Feb 16, 2022 6:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-15_22_45_24-3579044423134093876
    Feb 16, 2022 6:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-15_22_45_24-3579044423134093876
    Feb 16, 2022 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-16T06:45:26.930Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 16, 2022 6:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T06:45:36.976Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 16, 2022 6:45:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T06:45:37.506Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 16, 2022 6:45:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T06:45:37.569Z: Expanding GroupByKey operations into optimizable parts.
    Feb 16, 2022 6:45:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T06:45:37.606Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 16, 2022 6:45:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T06:45:37.678Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 16, 2022 6:45:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T06:45:37.703Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 16, 2022 6:45:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T06:45:37.747Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 16, 2022 6:45:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T06:45:38.108Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 16, 2022 6:45:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T06:45:38.197Z: Starting 5 workers in us-central1-f...
    Feb 16, 2022 6:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T06:45:57.754Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 16, 2022 6:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T06:46:21.474Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 16, 2022 6:46:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T06:46:46.521Z: Workers have started successfully.
    Feb 16, 2022 6:46:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T06:46:46.551Z: Workers have started successfully.
    Feb 16, 2022 6:47:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-16T06:47:16.701Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEN1VzVoT250MEw4UxoCamQaAmly/streams/CAYaAmpkGgJpciDy5qbYBCgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEN1VzVoT250MEw4UxoCamQaAmly/streams/CAYaAmpkGgJpciDy5qbYBCgC': offset 95398 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEN1VzVoT250MEw4UxoCamQaAmly/streams/CAYaAmpkGgJpciDy5qbYBCgC': offset 95398 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 16, 2022 6:47:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-16T06:47:17.477Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEN1VzVoT250MEw4UxoCamQaAmly/streams/CAUaAmpkGgJpciDS4Ze6AigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEN1VzVoT250MEw4UxoCamQaAmly/streams/CAUaAmpkGgJpciDS4Ze6AigC': offset 81670 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEN1VzVoT250MEw4UxoCamQaAmly/streams/CAUaAmpkGgJpciDS4Ze6AigC': offset 81670 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 16, 2022 6:47:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T06:47:20.810Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 16, 2022 6:47:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T06:47:20.981Z: Cleaning up.
    Feb 16, 2022 6:47:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T06:47:21.069Z: Stopping worker pool...
    Feb 16, 2022 6:49:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T06:49:40.429Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 16, 2022 6:49:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T06:49:40.485Z: Worker pool stopped.
    Feb 16, 2022 6:49:47 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-15_22_45_24-3579044423134093876 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d19c834c-5c0e-4892-bb93-5501bb955adc and timestamp: 2022-02-16T06:49:47.100000000Z:
                     Metric:                    Value:
                   read_time                    10.545
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 16, 2022 6:49:47 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 43.288 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 28s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/apd2eljtbia5g

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3037

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3037/display/redirect?page=changes>

Changes:

[noreply] Seznam Case Study (#16825)

[noreply] [Website] Apache Hop Case Study (#16824)

[noreply] [BEAM-13694] Force hadoop-hdfs-client in hadoopVersion tests for hdfs

[noreply] [Website] Ricardo - added case study feedback (#16807)

[noreply] Merge pull request #16735 from [BEAM-13827] - fix medium file size


------------------------------------------
[...truncated 365.20 KB...]
    INFO: 2022-02-16T00:54:45.803Z: Worker configuration: e2-standard-2 in us-central1-c.
    Feb 16, 2022 12:54:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T00:54:46.529Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 16, 2022 12:54:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T00:54:46.563Z: Expanding GroupByKey operations into optimizable parts.
    Feb 16, 2022 12:54:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T00:54:46.589Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 16, 2022 12:54:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T00:54:46.649Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 16, 2022 12:54:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T00:54:46.673Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 16, 2022 12:54:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T00:54:46.700Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 16, 2022 12:54:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T00:54:47.178Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 16, 2022 12:54:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T00:54:47.275Z: Starting 5 workers in us-central1-c...
    Feb 16, 2022 12:55:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T00:54:58.965Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 16, 2022 12:55:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T00:55:17.342Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Feb 16, 2022 12:55:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T00:55:17.377Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Feb 16, 2022 12:55:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T00:55:27.666Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 16, 2022 12:55:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T00:55:51.060Z: Workers have started successfully.
    Feb 16, 2022 12:55:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T00:55:51.101Z: Workers have started successfully.
    Feb 16, 2022 12:56:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-16T00:56:21.788Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFE0bXdfbnJ0VTc4cxoCamQaAmly/streams/CAQaAmpkGgJpciCG5b6jBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFE0bXdfbnJ0VTc4cxoCamQaAmly/streams/CAQaAmpkGgJpciCG5b6jBygC': offset 88698 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFE0bXdfbnJ0VTc4cxoCamQaAmly/streams/CAQaAmpkGgJpciCG5b6jBygC': offset 88698 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
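
The FAILED_PRECONDITION above comes from the BigQuery Storage Read API: the
worker asked a read stream for an offset the server had not yet allocated.
Below is a minimal sketch of the DIRECT_READ path these tests exercise; the
table reference is an assumption, and this is an illustration of the read
method, not the test's own code.

    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;

    public class DirectReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        // DIRECT_READ reads through the Storage Read API, the code path
        // (BigQueryStorageStreamSource) that failed above while advancing
        // a stream reader.
        PCollection<TableRow> rows =
            p.apply(BigQueryIO.readTableRows()
                .from("apache-beam-testing:beam.HACKER_NEWS")  // assumed table reference
                .withMethod(Method.DIRECT_READ));
        p.run().waitUntilFinish();
      }
    }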

    Feb 16, 2022 12:56:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-16T00:56:22.134Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFE0bXdfbnJ0VTc4cxoCamQaAmly/streams/CAkaAmpkGgJpciDqgq2bBCgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFE0bXdfbnJ0VTc4cxoCamQaAmly/streams/CAkaAmpkGgJpciDqgq2bBCgC': offset 96396 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFE0bXdfbnJ0VTc4cxoCamQaAmly/streams/CAkaAmpkGgJpciDqgq2bBCgC': offset 96396 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 16, 2022 12:56:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-16T00:56:22.772Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFE0bXdfbnJ0VTc4cxoCamQaAmly/streams/CAMaAmpkGgJpciCiya_NBSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFE0bXdfbnJ0VTc4cxoCamQaAmly/streams/CAMaAmpkGgJpciCiya_NBSgC': offset 88980 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFE0bXdfbnJ0VTc4cxoCamQaAmly/streams/CAMaAmpkGgJpciCiya_NBSgC': offset 88980 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 16, 2022 12:56:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T00:56:27.191Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 16, 2022 12:56:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T00:56:27.387Z: Cleaning up.
    Feb 16, 2022 12:56:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T00:56:27.514Z: Stopping worker pool...
    Feb 16, 2022 12:59:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T00:59:09.313Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 16, 2022 12:59:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-16T00:59:09.359Z: Worker pool stopped.
    Feb 16, 2022 12:59:16 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-15_16_54_34-8761878774188779281 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8fbeeaf8-42de-41e3-8eaf-ec1e0b012ed9 and timestamp: 2022-02-16T00:59:16.745000000Z:
                     Metric:                    Value:
                   read_time                    11.796
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 16, 2022 12:59:16 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 8 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.003 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.016 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 5 mins 34.709 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 56s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/5fxt6hgwbf4yu

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3036

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3036/display/redirect?page=changes>

Changes:

[mattcasters] [BEAM-13854] Document casting trick for Avro value serializer in KafkaIO

[noreply] Merge pull request #16838 from [BEAM-13931] - make sure large rows cause


------------------------------------------
[...truncated 377.56 KB...]
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHNNd0N5RzJvcUZsXxoCamQaAmly/streams/CAIaAmpkGgJpciCltYTAASgC': offset 78202 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 15, 2022 6:48:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-15T18:48:06.050Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHNNd0N5RzJvcUZsXxoCamQaAmly/streams/CAYaAmpkGgJpciCP-_eJBigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHNNd0N5RzJvcUZsXxoCamQaAmly/streams/CAYaAmpkGgJpciCP-_eJBigC': offset 78405 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHNNd0N5RzJvcUZsXxoCamQaAmly/streams/CAYaAmpkGgJpciCP-_eJBigC': offset 78405 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 15, 2022 6:48:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-15T18:48:06.052Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHNNd0N5RzJvcUZsXxoCamQaAmly/streams/CAEaAmpkGgJpciC0revyBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHNNd0N5RzJvcUZsXxoCamQaAmly/streams/CAEaAmpkGgJpciC0revyBygC': offset 93093 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHNNd0N5RzJvcUZsXxoCamQaAmly/streams/CAEaAmpkGgJpciC0revyBygC': offset 93093 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 15, 2022 6:48:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-15T18:48:06.240Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHNNd0N5RzJvcUZsXxoCamQaAmly/streams/CAUaAmpkGgJpciCyyaWSBigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHNNd0N5RzJvcUZsXxoCamQaAmly/streams/CAUaAmpkGgJpciCyyaWSBigC': offset 119996 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHNNd0N5RzJvcUZsXxoCamQaAmly/streams/CAUaAmpkGgJpciCyyaWSBigC': offset 119996 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 15, 2022 6:48:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T18:48:11.379Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 15, 2022 6:48:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T18:48:11.578Z: Cleaning up.
    Feb 15, 2022 6:48:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T18:48:11.664Z: Stopping worker pool...
    Feb 15, 2022 6:50:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T18:50:37.575Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 15, 2022 6:50:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T18:50:37.628Z: Worker pool stopped.
    Feb 15, 2022 6:50:44 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-15_10_46_08-4032997845310748723 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 3800788b-2ef2-4999-8c56-0803efe53419 and timestamp: 2022-02-15T18:50:44.317000000Z:
                     Metric:                    Value:
                   read_time                    12.698
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 15, 2022 6:50:44 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.01 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.009 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 56.69 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 24s
165 actionable tasks: 104 executed, 59 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/gvxczo4bx2cy6

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3035

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3035/display/redirect>

Changes:


------------------------------------------
[...truncated 358.18 KB...]
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 15, 2022 12:45:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 15, 2022 12:45:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 15, 2022 12:45:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 15, 2022 12:45:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 15, 2022 12:45:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 15, 2022 12:45:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 15, 2022 12:45:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@848490487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)
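
The IllegalStateException above is the planner failing to infer a coder for a
PCollection of Beam Rows, and the message itself names the fix: attach a
schema with PCollection.setRowSchema so a RowCoder can be derived. A minimal
sketch follows, using the four fields the test's SELECT projects; the field
types are an assumption.

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFix {
      // Hypothetical helper: attach an explicit schema so a RowCoder can be
      // inferred, avoiding the "Unable to return a default Coder" failure above.
      static PCollection<Row> withHackerNewsSchema(PCollection<Row> rows) {
        Schema schema =
            Schema.builder()
                .addStringField("author")  // field names from the test's SELECT;
                .addStringField("type")    // the types here are assumptions
                .addStringField("title")
                .addInt64Field("score")
                .build();
        return rows.setRowSchema(schema);
      }
    }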

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 15, 2022 12:45:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 15, 2022 12:45:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 15, 2022 12:45:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 15, 2022 12:45:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 15, 2022 12:45:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 15, 2022 12:45:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 15, 2022 12:45:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 15, 2022 12:45:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
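
The BEAMPlan above shows the predicate being pushed into the BigQuery source
(BeamPushDownIOSourceRel) rather than running as a separate Filter stage. As a
point of comparison, here is a minimal sketch of the same query expressed with
SqlTransform over a schema-aware PCollection<Row> named hackerNews (an assumed
input); over a plain PCollection the implicit table name is PCOLLECTION and no
storage push-down applies, so this only illustrates the query shape.

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class PushDownQuerySketch {
      // Same projection and filter as the test's SQL; with a single input the
      // Beam SQL planner exposes it under the implicit table name PCOLLECTION.
      static PCollection<Row> filterStoriesAndJobs(PCollection<Row> hackerNews) {
        return hackerNews.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score "
                    + "FROM PCOLLECTION "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
      }
    }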
    Feb 15, 2022 12:45:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 15, 2022 12:46:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 15, 2022 12:46:07 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 15, 2022 12:46:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test255502477873127324.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-3MhANPq8Bfqw5UfoeMBdfFWjrSEpS20QN7PSyVUNSs0.jar
    Feb 15, 2022 12:46:07 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 0 seconds
    Feb 15, 2022 12:46:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 15, 2022 12:46:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash 694be1f5839fc83b0450cfd58130376c711a7711337b0406d05ca4f3750e7d20> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-aUvh9YOfyDsEUM_VgTA3bHEadxEzewQG0Fyk83UOfSA.pb
    Feb 15, 2022 12:46:10 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 15, 2022 12:46:10 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 15, 2022 12:46:10 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 15, 2022 12:46:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 15, 2022 12:46:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-15_04_46_11-15344545394855345531?project=apache-beam-testing
    Feb 15, 2022 12:46:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-15_04_46_11-15344545394855345531
    Feb 15, 2022 12:46:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-15_04_46_11-15344545394855345531
    Feb 15, 2022 12:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-15T12:46:12.991Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 15, 2022 12:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T12:46:21.216Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 15, 2022 12:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T12:46:22.080Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 15, 2022 12:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T12:46:22.121Z: Expanding GroupByKey operations into optimizable parts.
    Feb 15, 2022 12:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T12:46:22.145Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 15, 2022 12:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T12:46:22.212Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 15, 2022 12:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T12:46:22.263Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 15, 2022 12:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T12:46:22.298Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 15, 2022 12:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T12:46:22.664Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 15, 2022 12:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T12:46:22.742Z: Starting 5 workers in us-central1-f...
    Feb 15, 2022 12:46:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T12:46:43.302Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
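The descriptor-quota message is informational: user counters still reach dataflow.googleapis.com/job/user_counter, but new custom.googleapis.com/* descriptors are no longer created for this project. Clearing stale descriptors goes through the Cloud Monitoring API; a minimal sketch with the google-cloud-monitoring Java client (the descriptor id is hypothetical):

    import com.google.cloud.monitoring.v3.MetricServiceClient;
    import com.google.monitoring.v3.MetricDescriptorName;

    // Delete one stale custom metric descriptor so new ones can be created.
    try (MetricServiceClient client = MetricServiceClient.create()) {
      client.deleteMetricDescriptor(
          MetricDescriptorName.of(
              "apache-beam-testing",                       // project from the log
              "custom.googleapis.com/old_unused_metric")); // hypothetical id
    }
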
    Feb 15, 2022 12:47:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T12:47:12.022Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 15, 2022 12:47:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T12:47:37.301Z: Workers have started successfully.
    Feb 15, 2022 12:47:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T12:47:37.384Z: Workers have started successfully.
    Feb 15, 2022 12:48:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-15T12:48:09.534Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGN4dkxiV2dsU2J3SRoCamQaAmly/streams/GgJqZBoCaXIg98q2lAMoAg"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGN4dkxiV2dsU2J3SRoCamQaAmly/streams/GgJqZBoCaXIg98q2lAMoAg': offset 77680 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGN4dkxiV2dsU2J3SRoCamQaAmly/streams/GgJqZBoCaXIg98q2lAMoAg': offset 77680 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
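
The FAILED_PRECONDITION above means the worker tried to resume the Storage Read API stream at offset 77680 before the server had allocated that offset; Beam surfaces it through ServerStreamIterator.hasNext in the suppressed trace. A minimal sketch of the underlying resume call against the google-cloud-bigquerystorage v1 client, with the stream name and offset taken from the log (illustrative only, not the worker's actual retry logic):

    import com.google.api.gax.rpc.ServerStream;
    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    // Re-open a read stream at an explicit offset; asking for an offset the
    // server has not produced yet fails with FAILED_PRECONDITION.
    String streamName =
        "projects/apache-beam-testing/locations/us/sessions/"
            + "CAISDGN4dkxiV2dsU2J3SRoCamQaAmly/streams/GgJqZBoCaXIg98q2lAMoAg";
    try (BigQueryReadClient client = BigQueryReadClient.create()) {
      ReadRowsRequest request =
          ReadRowsRequest.newBuilder()
              .setReadStream(streamName)
              .setOffset(77680L) // the offset reported in the error
              .build();
      ServerStream<ReadRowsResponse> responses =
          client.readRowsCallable().call(request);
      for (ReadRowsResponse response : responses) {
        // consume rows...
      }
    }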

    Feb 15, 2022 12:48:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T12:48:13.163Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 15, 2022 12:48:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T12:48:13.300Z: Cleaning up.
    Feb 15, 2022 12:48:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T12:48:13.379Z: Stopping worker pool...
    Feb 15, 2022 12:50:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T12:50:40.981Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 15, 2022 12:50:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T12:50:41.045Z: Worker pool stopped.
    Feb 15, 2022 12:50:47 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-15_04_46_11-15344545394855345531 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 5ef4be25-a03c-4c22-92e5-1d32bc49b708 and timestamp: 2022-02-15T12:50:47.622000000Z:
                     Metric:                    Value:
                   read_time                    10.447
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 15, 2022 12:50:47 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
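
This warning means the InfluxDB database/measurement settings never reached the test, so the read_time and fields_read values above were computed but not persisted. Assuming the option names used elsewhere in Beam's test utilities (an assumption, not confirmed by this log), the fix would be two more entries in the job's -DbeamTestPipelineOptions list:

    "--influxDatabase=beam_performance","--influxMeasurement=sql_bqio_read_java_batch"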

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 5 mins 3.118 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 25s
165 actionable tasks: 105 executed, 58 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/h6zqfmbdvfckm

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3034

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3034/display/redirect?page=changes>

Changes:

[Kenneth Knowles] Add test for processing time continuation trigger


------------------------------------------
[...truncated 352.66 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is d749b137071d725a5d27e564789cf238
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 15, 2022 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 15, 2022 6:45:19 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 15, 2022 6:45:20 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 15, 2022 6:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 15, 2022 6:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 15, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 15, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 15, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 15, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 15, 2022 6:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
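
This failure (and the identical one for readUsingDefaultMethod below) is the standard missing-schema case: the RowMonitor ParDo outputs Beam Rows, no Coder can be inferred for Row without a schema, so the output PCollection needs PCollection.setRowSchema, exactly as the message suggests. A minimal sketch of that fix, assuming a Row-producing DoFn and field types mirroring the query's projection (names and types are illustrative):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    Schema schema =
        Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score") // assumed type for score
            .build();

    // input: a PCollection<Row> from the source rel;
    // RowMonitorDoFn is a hypothetical stand-in for ParDo(RowMonitor).
    PCollection<Row> monitored =
        input
            .apply(ParDo.of(new RowMonitorDoFn()))
            .setRowSchema(schema); // attaches the schema so RowCoder can be used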

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 15, 2022 6:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 15, 2022 6:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 15, 2022 6:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 15, 2022 6:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 15, 2022 6:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 15, 2022 6:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 15, 2022 6:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@635097653]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 15, 2022 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 15, 2022 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 15, 2022 6:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 15, 2022 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 15, 2022 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 15, 2022 6:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 15, 2022 6:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 15, 2022 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 15, 2022 6:45:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 15, 2022 6:45:31 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 15, 2022 6:45:31 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 15, 2022 6:45:31 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7961702281861060651.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-mIr35fjh3ZKtjNrk1GYk5VTDhnGyTktNwsIimn6yx7c.jar
    Feb 15, 2022 6:45:31 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 0 seconds
    Feb 15, 2022 6:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 15, 2022 6:45:32 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143942 bytes, hash 830bb510dfb922a884468684de5885fb058d88ef7ac3bbf3ce04ab5673720ba7> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-gwu1EN-5IqiERoaE3liF-wWNiO96w7vzzgSrVnNyC6c.pb
    Feb 15, 2022 6:45:35 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 15, 2022 6:45:35 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 15, 2022 6:45:35 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 15, 2022 6:45:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 15, 2022 6:45:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-14_22_45_35-294749281135090415?project=apache-beam-testing
    Feb 15, 2022 6:45:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-14_22_45_35-294749281135090415
    Feb 15, 2022 6:45:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-14_22_45_35-294749281135090415
    Feb 15, 2022 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-15T06:45:37.538Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 15, 2022 6:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T06:45:52.420Z: Worker configuration: e2-standard-2 in us-central1-c.
    Feb 15, 2022 6:45:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T06:45:53.052Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 15, 2022 6:45:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T06:45:53.083Z: Expanding GroupByKey operations into optimizable parts.
    Feb 15, 2022 6:45:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T06:45:53.120Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 15, 2022 6:45:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T06:45:53.177Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 15, 2022 6:45:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T06:45:53.198Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 15, 2022 6:45:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T06:45:53.224Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 15, 2022 6:45:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T06:45:53.553Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 15, 2022 6:45:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T06:45:53.615Z: Starting 5 workers in us-central1-c...
    Feb 15, 2022 6:46:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T06:46:07.299Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 15, 2022 6:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T06:46:33.719Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 15, 2022 6:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T06:47:00.167Z: Workers have started successfully.
    Feb 15, 2022 6:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T06:47:00.194Z: Workers have started successfully.
    Feb 15, 2022 6:47:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T06:47:30.744Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 15, 2022 6:47:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T06:47:30.882Z: Cleaning up.
    Feb 15, 2022 6:47:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T06:47:30.961Z: Stopping worker pool...
    Feb 15, 2022 6:49:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T06:49:57.987Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 15, 2022 6:49:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T06:49:58.033Z: Worker pool stopped.
    Feb 15, 2022 6:50:04 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-14_22_45_35-294749281135090415 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 28ec3099-55d1-4396-b9b5-0bad96ef4f49 and timestamp: 2022-02-15T06:50:04.269000000Z:
                     Metric:                    Value:
                   read_time                     7.895
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 15, 2022 6:50:04 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 48.87 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 43s
165 actionable tasks: 106 executed, 57 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/ffnkgi6c7b4kq

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3033

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3033/display/redirect?page=changes>

Changes:

[Kyle Weaver] [BEAM-12712] Spark: Exclude looping timer tests.

[Kyle Weaver] [BEAM-13919] Annotate PerKeyOrderingTest with UsesStatefulParDo.

[noreply] [BEAM-13894] Unit test utilities in the ptest package (#16830)

[noreply] [BEAM-13922] [Coverage] Make boot.go more testable and add tests

[noreply] Exclude SpannerChangeStream IT from Dataflow V1 postcommit (#16851)

[noreply] [BEAM-13930] Address StateSpec consistency issue between Runner and Fn


------------------------------------------
[...truncated 352.56 KB...]
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 36c84499ec8f5524e6d969d384b7774d
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 15, 2022 1:18:41 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 15, 2022 1:18:42 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 15, 2022 1:18:43 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 15, 2022 1:18:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 15, 2022 1:18:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 15, 2022 1:18:46 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 15, 2022 1:18:46 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 15, 2022 1:18:46 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 15, 2022 1:18:46 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 15, 2022 1:18:47 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 15, 2022 1:18:47 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 15, 2022 1:18:47 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 15, 2022 1:18:47 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 15, 2022 1:18:47 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 15, 2022 1:18:47 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 15, 2022 1:18:47 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 15, 2022 1:18:48 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@848490487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 15, 2022 1:18:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 15, 2022 1:18:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 15, 2022 1:18:48 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 15, 2022 1:18:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 15, 2022 1:18:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 15, 2022 1:18:48 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 15, 2022 1:18:48 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 15, 2022 1:18:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 15, 2022 1:18:49 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 15, 2022 1:18:53 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 15, 2022 1:18:53 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-QuGnur7irnXUBunkr5-Fg8-cA-O2Pbxk01JwQWgcxT4.jar
    Feb 15, 2022 1:18:53 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1839992188376700421.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-UegDbh3ll-NTtMYrlYXswvgCPYgNJEwJF7GG1xuaAPQ.jar
    Feb 15, 2022 1:18:53 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gradle-worker-NW0H_PrNLIVJt09w3Gd52J_pt5glcgqx7QhmmZqFVro.jar
    Feb 15, 2022 1:18:54 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 365 files cached, 2 files newly uploaded in 0 seconds
    Feb 15, 2022 1:18:54 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 15, 2022 1:18:54 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143945 bytes, hash 4e54f8e6797e822153bf3c9eb8f2594409d042353402d2ad42b85eab1465f78b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-TlT45nl-giFTvzyeuPJZRAnQQjU0AtKtQrheqxRl94s.pb
    Feb 15, 2022 1:18:57 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 15, 2022 1:18:57 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 15, 2022 1:18:57 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 15, 2022 1:18:57 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 15, 2022 1:18:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-14_17_18_57-232752117377724102?project=apache-beam-testing
    Feb 15, 2022 1:18:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-14_17_18_57-232752117377724102
    Feb 15, 2022 1:18:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-14_17_18_57-232752117377724102
    Feb 15, 2022 1:19:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-15T01:18:59.955Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 15, 2022 1:19:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T01:19:09.584Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 15, 2022 1:19:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T01:19:10.194Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 15, 2022 1:19:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T01:19:10.244Z: Expanding GroupByKey operations into optimizable parts.
    Feb 15, 2022 1:19:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T01:19:10.294Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 15, 2022 1:19:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T01:19:10.376Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 15, 2022 1:19:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T01:19:10.403Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 15, 2022 1:19:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T01:19:10.432Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 15, 2022 1:19:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T01:19:10.786Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 15, 2022 1:19:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T01:19:10.874Z: Starting 5 workers in us-central1-a...
    Feb 15, 2022 1:19:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T01:19:23.045Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 15, 2022 1:19:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T01:19:55.259Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 15, 2022 1:20:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T01:20:19.569Z: Workers have started successfully.
    Feb 15, 2022 1:20:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T01:20:19.601Z: Workers have started successfully.
    Feb 15, 2022 1:20:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T01:20:47.282Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 15, 2022 1:20:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T01:20:47.513Z: Cleaning up.
    Feb 15, 2022 1:20:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T01:20:47.586Z: Stopping worker pool...
    Feb 15, 2022 1:23:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T01:23:18.130Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 15, 2022 1:23:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-15T01:23:18.216Z: Worker pool stopped.
    Feb 15, 2022 1:23:25 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-14_17_18_57-232752117377724102 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 02bf3ddb-ee6c-41ae-95d2-89819983c7a4 and timestamp: 2022-02-15T01:23:25.807000000Z:
                     Metric:                    Value:
                   read_time                     5.707
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 15, 2022 1:23:25 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
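
This warning is why no metrics reach InfluxDB: the publisher is handed settings without a database/measurement and skips the write. Below is a sketch of fully populated settings, assuming the Beam test-utils InfluxDBSettings builder and hypothetical host, database, and measurement values (in the Jenkins jobs these normally arrive via pipeline options):

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSettingsSketch {
      public static void main(String[] args) {
        // With both database and measurement present, publishWithCheck would
        // publish instead of logging the warning above. All values here are
        // hypothetical placeholders.
        InfluxDBSettings settings =
            InfluxDBSettings.builder()
                .withHost("http://localhost:8086")           // hypothetical host
                .withDatabase("beam_test_metrics")           // hypothetical database
                .withMeasurement("sql_bqio_read_java_batch")
                .get();
      }
    }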

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 47.928 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 40s
165 actionable tasks: 106 executed, 57 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/6ad5qpi6ht7gg

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3032

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3032/display/redirect?page=changes>

Changes:

[noreply] Update 2.36.0 blog post to mention ARM64 support

[noreply] Minor: Disable checker framework in nightly snapshot (#16829)

[artur.khanin] Updated example link

[noreply] [BEAM-13860] Make `DoFn.infer_output_type` return element type (#16788)


------------------------------------------
[...truncated 347.08 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c75c489ab52404ff3d760b5815e54229
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 15'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 15'
Successfully started process 'Gradle Test Executor 15'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 14, 2022 6:47:17 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 14, 2022 6:47:17 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 14, 2022 6:47:18 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 14, 2022 6:47:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 14, 2022 6:47:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 14, 2022 6:47:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 14, 2022 6:47:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 14, 2022 6:47:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 14, 2022 6:47:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 14, 2022 6:47:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


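For orientation: the SQLPlan/BEAMPlan output above comes from Beam SQL planning this query over a BigQuery external table. A minimal sketch of the same flow, assuming a hypothetical project/dataset, a trimmed four-column schema (the real HACKER_NEWS table has 14 columns), and registration via SqlTransform.withDdlString:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class HackerNewsSqlSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // DDL registers the table with the SQL planner; the "method" property
        // mirrors the "BigQuery method is set to: DIRECT_READ" log lines above.
        String ddl =
            "CREATE EXTERNAL TABLE HACKER_NEWS (title VARCHAR, `by` VARCHAR, "
                + "score BIGINT, type VARCHAR) "
                + "TYPE bigquery "
                + "LOCATION 'my-project:my_dataset.hacker_news' " // hypothetical table
                + "TBLPROPERTIES '{\"method\": \"DIRECT_READ\"}'";

        PCollection<Row> rows =
            p.apply(
                SqlTransform.query(
                        "SELECT `by` AS author, type, title, score FROM HACKER_NEWS "
                            + "WHERE (type = 'story' OR type = 'job') AND score > 2")
                    .withDdlString(ddl));

        p.run().waitUntilFinish();
      }
    }
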
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

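The IllegalStateException above is Beam's standard complaint when a PCollection<Row> reaches pipeline finalization without a schema attached. A self-contained sketch of the remedy the message itself names, PCollection.setRowSchema, with an illustrative pass-through DoFn standing in for RowMonitor and an assumed four-field schema (neither is the IT's actual code):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        Schema schema =
            Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();

        Row story = Row.withSchema(schema).addValues("alice", "story", "hello", 3L).build();

        PCollection<Row> rows = p.apply(Create.of(Arrays.asList(story)).withRowSchema(schema));

        PCollection<Row> monitored =
            rows.apply(
                    "ParDo(RowMonitor)",
                    ParDo.of(
                        new DoFn<Row, Row>() {
                          @ProcessElement
                          public void processElement(@Element Row row, OutputReceiver<Row> out) {
                            out.output(row); // pass-through; a real monitor would record metrics
                          }
                        }))
                // The ParDo output has no inferable coder for Row; reattaching the
                // schema lets Beam use a SchemaCoder instead of throwing the
                // IllegalStateException quoted above.
                .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }
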
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 14, 2022 6:47:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 14, 2022 6:47:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 14, 2022 6:47:23 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 14, 2022 6:47:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 14, 2022 6:47:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 14, 2022 6:47:23 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 14, 2022 6:47:23 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@848490487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 14, 2022 6:47:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 14, 2022 6:47:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 14, 2022 6:47:23 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 14, 2022 6:47:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 14, 2022 6:47:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 14, 2022 6:47:23 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 14, 2022 6:47:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 14, 2022 6:47:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
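
That log line is the push-down taking effect: only the four used fields are read, and the filter is evaluated by the BigQuery Storage Read API rather than inside the pipeline. Outside of SQL, the same two knobs are exposed directly on BigQueryIO; a minimal sketch, with a hypothetical table reference:

    import com.google.api.services.bigquery.model.TableRow;
    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.values.PCollection;

    public class DirectReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        PCollection<TableRow> rows =
            p.apply(
                BigQueryIO.readTableRows()
                    .from("my-project:my_dataset.hacker_news") // hypothetical table
                    .withMethod(Method.DIRECT_READ)
                    // Column projection: the "usedFields" of the BeamPushDownIOSourceRel.
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    // Server-side filter, equivalent to the pushed-down predicate above.
                    .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }
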
    Feb 14, 2022 6:47:24 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 14, 2022 6:47:28 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 14, 2022 6:47:28 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-aiNHdtykAIX80LKys347oq_viw_HTVuLfbzd72BjsUc.jar
    Feb 14, 2022 6:47:28 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8546690948850375345.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-gibUafY1QASKB89LsvljtArbCXKNvoGkylNd-Hwsp34.jar
    Feb 14, 2022 6:47:29 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 0 seconds
    Feb 14, 2022 6:47:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 14, 2022 6:47:29 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash e2827a2edb5311e79f0bcd4b630ade77bbb3bd34458ee331c50888bb0789e6c7> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-4oJ6LttTEeefC81LYwred7uzvTRFjuMxxQiIuweJ5sc.pb
    Feb 14, 2022 6:47:32 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 14, 2022 6:47:32 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 14, 2022 6:47:32 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 14, 2022 6:47:32 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 14, 2022 6:47:33 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-14_10_47_32-15315377606969597937?project=apache-beam-testing
    Feb 14, 2022 6:47:33 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-14_10_47_32-15315377606969597937
    Feb 14, 2022 6:47:33 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-14_10_47_32-15315377606969597937
    Feb 14, 2022 6:47:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-14T18:47:34.886Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 14, 2022 6:47:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T18:47:43.195Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 14, 2022 6:47:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T18:47:43.931Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 14, 2022 6:47:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T18:47:43.971Z: Expanding GroupByKey operations into optimizable parts.
    Feb 14, 2022 6:47:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T18:47:44.016Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 14, 2022 6:47:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T18:47:44.078Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 14, 2022 6:47:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T18:47:44.101Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 14, 2022 6:47:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T18:47:44.126Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 14, 2022 6:47:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T18:47:44.426Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 14, 2022 6:47:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T18:47:44.504Z: Starting 5 workers in us-central1-a...
    Feb 14, 2022 6:48:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T18:48:00.873Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 14, 2022 6:48:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T18:48:23.956Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 14, 2022 6:48:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T18:48:47.815Z: Workers have started successfully.
    Feb 14, 2022 6:48:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T18:48:47.841Z: Workers have started successfully.
    Feb 14, 2022 6:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T18:49:16.120Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 14, 2022 6:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T18:49:16.262Z: Cleaning up.
    Feb 14, 2022 6:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T18:49:16.342Z: Stopping worker pool...
    Feb 14, 2022 6:51:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T18:51:38.411Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 14, 2022 6:51:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T18:51:38.453Z: Worker pool stopped.
    Feb 14, 2022 6:51:49 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-14_10_47_32-15315377606969597937 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 2d8566ba-3d78-4e4f-9837-79e321fb75bb and timestamp: 2022-02-14T18:51:49.703000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     5.779

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 14, 2022 6:51:49 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 15 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.008 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.008 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 36.491 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 4m 49s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/6tz3rssgks57s

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3031

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3031/display/redirect>

Changes:


------------------------------------------
[...truncated 346.27 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c75c489ab52404ff3d760b5815e54229
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 14, 2022 12:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 14, 2022 12:44:51 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 14, 2022 12:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 14, 2022 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 14, 2022 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 14, 2022 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 14, 2022 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 14, 2022 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 14, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 14, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 14, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 14, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 14, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 14, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 14, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 14, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 14, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@848490487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 14, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 14, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 14, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 14, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 14, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 14, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 14, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 14, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 14, 2022 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 14, 2022 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 14, 2022 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-aiNHdtykAIX80LKys347oq_viw_HTVuLfbzd72BjsUc.jar
    Feb 14, 2022 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3961273267339180684.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-b9APZeF8c-WhIwU5hseXrRECFWUUVaajlIhhU_A-Kdc.jar
    Feb 14, 2022 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 1 seconds
    Feb 14, 2022 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 14, 2022 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash 3370d4082979d27bbba5bf7fde47641282eaa33aa5811ecec08d6846bcc2b162> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-M3DUCCl50nu7pb9_3kdkEoLqozqlgR7OwI1oRrzCsWI.pb
    Feb 14, 2022 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 14, 2022 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 14, 2022 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 14, 2022 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 14, 2022 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-14_04_45_06-15325443906813856356?project=apache-beam-testing
    Feb 14, 2022 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-14_04_45_06-15325443906813856356
    Feb 14, 2022 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-14_04_45_06-15325443906813856356
    Feb 14, 2022 12:45:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-14T12:45:08.538Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 14, 2022 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T12:45:17.263Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 14, 2022 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T12:45:17.875Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 14, 2022 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T12:45:17.919Z: Expanding GroupByKey operations into optimizable parts.
    Feb 14, 2022 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T12:45:17.946Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 14, 2022 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T12:45:18.006Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 14, 2022 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T12:45:18.031Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 14, 2022 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T12:45:18.055Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 14, 2022 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T12:45:18.385Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 14, 2022 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T12:45:18.461Z: Starting 5 workers in us-central1-f...
    Feb 14, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T12:45:33.974Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 14, 2022 12:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T12:46:03.688Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 14, 2022 12:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T12:46:28.007Z: Workers have started successfully.
    Feb 14, 2022 12:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T12:46:28.059Z: Workers have started successfully.
    Feb 14, 2022 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T12:47:00.965Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 14, 2022 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T12:47:01.111Z: Cleaning up.
    Feb 14, 2022 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T12:47:01.189Z: Stopping worker pool...
    Feb 14, 2022 12:49:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T12:49:29.075Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 14, 2022 12:49:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T12:49:29.137Z: Worker pool stopped.
    Feb 14, 2022 12:49:35 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-14_04_45_06-15325443906813856356 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 139e5945-8dc9-43d4-a9d7-e2eb9e91623b and timestamp: 2022-02-14T12:49:35.116000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.032

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 14, 2022 12:49:35 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 48.139 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 13s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/tpmub37rxmfww

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3030

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3030/display/redirect?page=changes>

Changes:

[akustov] fix name project id from secreton scio deploy action

[alexander.zhuravlev] [BEAM-13775] Fixed bug with run button

[ihr] [BEAM-13836] Fix the answers placeholders locations in the Python katas

[noreply] Merge pull request #16703 from [BEAM-13804][Playground][Bugfix] Add

[noreply] Merge pull request #16611 from [BEAM-13712][Playground] Add graph for

[noreply] Merge pull request #16757 from [BEAM-13655] [Playground] Persist the


------------------------------------------
[...truncated 352.30 KB...]
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 14, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 14, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 14, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 14, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 14, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 14, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 14, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@635097653]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)
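
Both readUsingDirectReadMethod and readUsingDefaultMethod fail the same way: the pipeline materializes a PCollection of Beam Rows with no schema attached, so no RowCoder can be inferred. The remedy the message itself names is PCollection.setRowSchema. A minimal sketch, assuming the field types implied by the query above (BigQuery integers map to INT64; nullability is a guess):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFix {
      // Attaching a schema lets the SDK derive a RowCoder for the collection,
      // so finishSpecifying no longer throws IllegalStateException.
      static PCollection<Row> attachRowSchema(PCollection<Row> rows) {
        Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();
        return rows.setRowSchema(schema);
      }
    }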

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 14, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 14, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 14, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 14, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 14, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 14, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 14, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 14, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
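
Unlike the DEFAULT runs above, the planner here produced a BeamPushDownIOSourceRel: only the four used fields are requested and the filter is handed to the BigQuery Storage Read API. Outside of Beam SQL, the same projection and filter push-down can be expressed directly on BigQueryIO; a minimal sketch (the table reference is illustrative, not taken from this build):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;

    class DirectReadSketch {
      static void readWithPushDown(Pipeline pipeline) {
        pipeline.apply(
            BigQueryIO.readTableRows()
                .from("some-project:some_dataset.hacker_news") // illustrative
                .withMethod(Method.DIRECT_READ)
                // Both options below are only honored by DIRECT_READ.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));
      }
    }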
    Feb 14, 2022 6:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 14, 2022 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 14, 2022 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-aiNHdtykAIX80LKys347oq_viw_HTVuLfbzd72BjsUc.jar
    Feb 14, 2022 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3628349329063321626.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-1OzsDqM79RmfhxJiz3ndBhItRLmpxQcFWEwkrlK_99o.jar
    Feb 14, 2022 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/commons-io/commons-io/2.6/815893df5f31da2ece4040fe0a12fd44b577afaf/commons-io-2.6.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-io-2.6--HfTBGYKwqFC84ZbrfyXHex-1zx0fH-NXS9ROcpzZRM.jar
    Feb 14, 2022 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/commons-lang/commons-lang/2.6/ce1edb914c94ebc388f086c6827e8bdeec71ac2/commons-lang-2.6.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-lang-2.6-UPEbCfh3wpTVbyRGP0fSj5Kc9QRPZIZhwPDPuumi9Jw.jar
    Feb 14, 2022 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 364 files cached, 3 files newly uploaded in 1 seconds
    Feb 14, 2022 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 14, 2022 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash 1df2491afe3f43431d3b025bf9cd23f9e47d3bd31ddc2fa54b5a121686e504ac> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-HfJJGv4_Q0MdOwJb-c0j-eR9O9Md3C-lS1oSFoblBKw.pb
    Feb 14, 2022 6:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 14, 2022 6:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 14, 2022 6:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 14, 2022 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 14, 2022 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-13_22_45_08-7242653148546339960?project=apache-beam-testing
    Feb 14, 2022 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-13_22_45_08-7242653148546339960
    Feb 14, 2022 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-13_22_45_08-7242653148546339960
    Feb 14, 2022 6:45:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-14T06:45:10.111Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 14, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T06:45:21.047Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T06:45:21.704Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T06:45:21.738Z: Expanding GroupByKey operations into optimizable parts.
    Feb 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T06:45:21.774Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T06:45:21.847Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T06:45:21.862Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T06:45:21.894Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T06:45:22.277Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T06:45:22.343Z: Starting 5 workers in us-central1-f...
    Feb 14, 2022 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T06:45:30.966Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 14, 2022 6:46:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T06:46:09.478Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 14, 2022 6:46:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T06:46:33.319Z: Workers have started successfully.
    Feb 14, 2022 6:46:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T06:46:33.349Z: Workers have started successfully.
    Feb 14, 2022 6:47:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-14T06:47:02.803Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDhBQ3VwMDJXdHljQxoCamQaAmly/streams/CAUaAmpkGgJpciCZr-jUBSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDhBQ3VwMDJXdHljQxoCamQaAmly/streams/CAUaAmpkGgJpciCZr-jUBSgC': offset 118337 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDhBQ3VwMDJXdHljQxoCamQaAmly/streams/CAUaAmpkGgJpciCZr-jUBSgC': offset 118337 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 14, 2022 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T06:47:05.484Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 14, 2022 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T06:47:05.640Z: Cleaning up.
    Feb 14, 2022 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T06:47:05.698Z: Stopping worker pool...
    Feb 14, 2022 6:49:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T06:49:38.503Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 14, 2022 6:49:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T06:49:38.552Z: Worker pool stopped.
    Feb 14, 2022 6:49:45 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-13_22_45_08-7242653148546339960 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 63859260-d680-4c16-8859-70ce81280cda and timestamp: 2022-02-14T06:49:45.589000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.869

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 14, 2022 6:49:45 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
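
This warning means the read_time and fields_read values above were recorded to BigQuery only: the InfluxDB publisher bails out because no measurement/database was configured, and the -DbeamTestPipelineOptions shown at the top of the log indeed carries only the BigQuery metrics options. If Influx publishing were wanted, the missing properties would have to be supplied as additional pipeline options, hypothetically along the lines of (option names as used by Beam's test utilities elsewhere; values illustrative and not verified against this job):

    --influxMeasurement=sql_bqio_read_java_batch --influxDatabase=beam_test_metrics --influxHost=http://localhost:8086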

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 58.607 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 26s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/glm223rnfrvmy

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3029

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3029/display/redirect>

Changes:


------------------------------------------
[...truncated 351.97 KB...]
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 14, 2022 12:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 14, 2022 12:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 14, 2022 12:44:53 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 14, 2022 12:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 14, 2022 12:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 14, 2022 12:44:53 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 14, 2022 12:44:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@848490487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 14, 2022 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 14, 2022 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 14, 2022 12:44:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 14, 2022 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 14, 2022 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 14, 2022 12:44:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 14, 2022 12:44:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 14, 2022 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 14, 2022 12:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 14, 2022 12:44:58 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 14, 2022 12:44:58 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-aiNHdtykAIX80LKys347oq_viw_HTVuLfbzd72BjsUc.jar
    Feb 14, 2022 12:44:58 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6279556152053527703.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ieJhe2iezXAGPKwBRkYhYMQiUSJ4H0huoV0A6uLAQ9U.jar
    Feb 14, 2022 12:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 0 seconds
    Feb 14, 2022 12:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 14, 2022 12:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash 078a3abcd48f612ebd80758446be1ea9a34d41557544dc5ba3699c0ae56d0ad1> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-B4o6vNSPYS69gHWERr4eqaNNQVV1RNxbo2mcCuVtCtE.pb
    Feb 14, 2022 12:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 14, 2022 12:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 14, 2022 12:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 14, 2022 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 14, 2022 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-13_16_45_03-15479753481268938046?project=apache-beam-testing
    Feb 14, 2022 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-13_16_45_03-15479753481268938046
    Feb 14, 2022 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-13_16_45_03-15479753481268938046
    Feb 14, 2022 12:45:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-14T00:45:05.190Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 14, 2022 12:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T00:45:13.442Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 14, 2022 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T00:45:14.392Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 14, 2022 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T00:45:14.431Z: Expanding GroupByKey operations into optimizable parts.
    Feb 14, 2022 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T00:45:14.449Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 14, 2022 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T00:45:14.531Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 14, 2022 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T00:45:14.561Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 14, 2022 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T00:45:14.593Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 14, 2022 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T00:45:14.917Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 14, 2022 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T00:45:14.983Z: Starting 5 workers in us-central1-a...
    Feb 14, 2022 12:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T00:45:19.746Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 14, 2022 12:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T00:45:56.672Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 14, 2022 12:46:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T00:46:21.566Z: Workers have started successfully.
    Feb 14, 2022 12:46:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T00:46:21.616Z: Workers have started successfully.
    Feb 14, 2022 12:46:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-14T00:46:49.353Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEQ4Zy1ScEZzdmpwSxoCamQaAmly/streams/CAMaAmpkGgJpciCh68zmBSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEQ4Zy1ScEZzdmpwSxoCamQaAmly/streams/CAMaAmpkGgJpciCh68zmBSgC': offset 66470 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEQ4Zy1ScEZzdmpwSxoCamQaAmly/streams/CAMaAmpkGgJpciCh68zmBSgC': offset 66470 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 14, 2022 12:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T00:46:51.612Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 14, 2022 12:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T00:46:51.731Z: Cleaning up.
    Feb 14, 2022 12:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T00:46:51.810Z: Stopping worker pool...
    Feb 14, 2022 12:49:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T00:49:11.868Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 14, 2022 12:49:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-14T00:49:11.915Z: Worker pool stopped.
    Feb 14, 2022 12:49:18 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-13_16_45_03-15479753481268938046 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): ec03d71f-68b3-49a7-a666-7e42626c1610 and timestamp: 2022-02-14T00:49:18.496000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.402

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 14, 2022 12:49:18 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 34.349 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 4m 59s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/r4gkelng3qfr6

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3028

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3028/display/redirect>

Changes:


------------------------------------------
[...truncated 349.04 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c75c489ab52404ff3d760b5815e54229
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 13, 2022 6:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 13, 2022 6:44:55 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 13, 2022 6:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 13, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 13, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 13, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 13, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 13, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 13, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 13, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 13, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 13, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 13, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 13, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 13, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 13, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 13, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@848490487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 13, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 13, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 13, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 13, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 13, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 13, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 13, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 13, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 13, 2022 6:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 13, 2022 6:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 13, 2022 6:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-aiNHdtykAIX80LKys347oq_viw_HTVuLfbzd72BjsUc.jar
    Feb 13, 2022 6:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6186359063036510204.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-iotlhJ8TTG8o9raajF6v0JyeR86H2gi9NldVSnsuRz8.jar
    Feb 13, 2022 6:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 0 seconds
    Feb 13, 2022 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 13, 2022 6:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash f274a800a68627c1c6227aa98c326ab3e04f3f6876459bdedae8f182b987585e> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-8nSoAKaGJ8HGInqpjDJqs-BPP2h2RZve2ujxgrmHWF4.pb
    Feb 13, 2022 6:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 13, 2022 6:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 13, 2022 6:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 13, 2022 6:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 13, 2022 6:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-13_10_45_11-18246000614424219582?project=apache-beam-testing
    Feb 13, 2022 6:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-13_10_45_11-18246000614424219582
    Feb 13, 2022 6:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-13_10_45_11-18246000614424219582
    Feb 13, 2022 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-13T18:45:16.836Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 13, 2022 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T18:45:24.993Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 13, 2022 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T18:45:25.586Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 13, 2022 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T18:45:25.612Z: Expanding GroupByKey operations into optimizable parts.
    Feb 13, 2022 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T18:45:25.641Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 13, 2022 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T18:45:25.701Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 13, 2022 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T18:45:25.737Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 13, 2022 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T18:45:25.763Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 13, 2022 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T18:45:26.095Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 13, 2022 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T18:45:26.172Z: Starting 5 workers in us-central1-a...
    Feb 13, 2022 6:45:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T18:45:46.087Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 13, 2022 6:46:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T18:46:06.471Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 13, 2022 6:46:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T18:46:30.481Z: Workers have started successfully.
    Feb 13, 2022 6:46:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T18:46:30.506Z: Workers have started successfully.
    Feb 13, 2022 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T18:46:57.581Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 13, 2022 6:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T18:46:57.710Z: Cleaning up.
    Feb 13, 2022 6:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T18:46:57.787Z: Stopping worker pool...
    Feb 13, 2022 6:49:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T18:49:20.756Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 13, 2022 6:49:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T18:49:20.802Z: Worker pool stopped.
    Feb 13, 2022 6:49:26 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-13_10_45_11-18246000614424219582 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 9ae6f5c0-03d6-425f-a482-0f4e3511bd74 and timestamp: 2022-02-13T18:49:26.900000000Z:
                     Metric:                    Value:
                   read_time                     5.145
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 13, 2022 6:49:26 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
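
This warning recurs at the end of every run in this thread: the InfluxDB publisher was never given a database/measurement, so the results above go only to the BigQuery dataset named in the task's options (beam_performance / sql_bqio_read_java_batch). In Beam's test utilities these values are normally passed as pipeline options; the option names in the sketch below follow the load-test convention and are an assumption, not taken from this job's configuration:

    // Hypothetical additions to the -DbeamTestPipelineOptions=[...] array shown
    // at the top of the task output; option names are assumed, not verified here.
    "--influxDatabase=beam_test_metrics",
    "--influxMeasurement=sql_bqio_read_java_batch",
    "--influxHost=http://localhost:8086"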

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 36.272 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 7s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/twwnbwtpjcwos

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3027

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3027/display/redirect>

Changes:


------------------------------------------
[...truncated 348.65 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c75c489ab52404ff3d760b5815e54229
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 13, 2022 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 13, 2022 12:45:07 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 13, 2022 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 13, 2022 12:45:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 13, 2022 12:45:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 13, 2022 12:45:11 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 13, 2022 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 13, 2022 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 13, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 13, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
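
Both coder failures in this run (readUsingDirectReadMethod here and readUsingDefaultMethod below) have the same shape: BeamSqlRelUtils.toPCollection applies a monitoring ParDo over Beam Rows, and the output of a DoFn<Row, Row> has no inferable coder, exactly as the message's second bullet says. The remedy the message itself names, PCollection.setRowSchema, reattaches the upstream schema after the ParDo. A minimal sketch with illustrative names (the real RowMonitor lives in the test code, not here):

    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowMonitorFix {
      static PCollection<Row> monitored(PCollection<Row> input) {
        return input
            .apply("ParDo(RowMonitor)",
                ParDo.of(new DoFn<Row, Row>() {
                  @ProcessElement
                  public void process(@Element Row row, OutputReceiver<Row> out) {
                    out.output(row); // pass-through; the real monitor records metrics here
                  }
                }))
            // Row outputs carry no inferable coder; reattaching the input's schema
            // is equivalent to setCoder(RowCoder.of(schema)).
            .setRowSchema(input.getSchema());
      }
    }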

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 13, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 13, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 13, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 13, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 13, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 13, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 13, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@848490487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 13, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 13, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 13, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 13, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 13, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 13, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 13, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 13, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 13, 2022 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 13, 2022 12:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 13, 2022 12:45:19 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-aiNHdtykAIX80LKys347oq_viw_HTVuLfbzd72BjsUc.jar
    Feb 13, 2022 12:45:19 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3916292441914219631.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-YN0v32aKtfpQY0DC3Zf-0HykkyG5WvBemtWusGc6L7E.jar
    Feb 13, 2022 12:45:19 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 0 seconds
    Feb 13, 2022 12:45:19 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 13, 2022 12:45:19 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143943 bytes, hash 1964bd12ac143244a580d229f80d00662436afb2a694a7845b84f359b84400b9> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-GWS9EqwUMkSlgNIp-A0AZiQ2r7KmlKeEW4TzWbhEALk.pb
    Feb 13, 2022 12:45:25 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 13, 2022 12:45:25 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 13, 2022 12:45:25 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 13, 2022 12:45:25 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 13, 2022 12:45:25 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-13_04_45_25-8506229996834720590?project=apache-beam-testing
    Feb 13, 2022 12:45:25 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-13_04_45_25-8506229996834720590
    Feb 13, 2022 12:45:25 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-13_04_45_25-8506229996834720590
    Feb 13, 2022 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-13T12:45:27.515Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 13, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T12:45:34.335Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 13, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T12:45:34.985Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 13, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T12:45:35.024Z: Expanding GroupByKey operations into optimizable parts.
    Feb 13, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T12:45:35.052Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 13, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T12:45:35.114Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 13, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T12:45:35.137Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 13, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T12:45:35.172Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 13, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T12:45:35.480Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 13, 2022 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T12:45:35.555Z: Starting 5 workers in us-central1-a...
    Feb 13, 2022 12:46:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T12:46:02.370Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 13, 2022 12:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T12:46:24.273Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 13, 2022 12:46:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T12:46:50.040Z: Workers have started successfully.
    Feb 13, 2022 12:46:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T12:46:50.069Z: Workers have started successfully.
    Feb 13, 2022 12:47:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T12:47:19.709Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 13, 2022 12:47:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T12:47:19.855Z: Cleaning up.
    Feb 13, 2022 12:47:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T12:47:19.931Z: Stopping worker pool...
    Feb 13, 2022 12:49:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T12:49:48.931Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 13, 2022 12:49:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T12:49:48.977Z: Worker pool stopped.
    Feb 13, 2022 12:49:54 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-13_04_45_25-8506229996834720590 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): a64ee32a-005a-4f22-bee9-3f91498c5ac7 and timestamp: 2022-02-13T12:49:54.796000000Z:
                     Metric:                    Value:
                   read_time                      8.13
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 13, 2022 12:49:54 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 51.481 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 35s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/gqehuyevt2ay2

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3026

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3026/display/redirect>

Changes:


------------------------------------------
[...truncated 361.84 KB...]
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-12_22_45_03-2295573665848706507
    Feb 13, 2022 6:45:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-13T06:45:05.829Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 13, 2022 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T06:45:16.035Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 13, 2022 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T06:45:16.668Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 13, 2022 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T06:45:16.719Z: Expanding GroupByKey operations into optimizable parts.
    Feb 13, 2022 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T06:45:16.750Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 13, 2022 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T06:45:16.831Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 13, 2022 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T06:45:16.863Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 13, 2022 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T06:45:16.888Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 13, 2022 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T06:45:17.245Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 13, 2022 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T06:45:17.328Z: Starting 5 workers in us-central1-f...
    Feb 13, 2022 6:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T06:45:43.663Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 13, 2022 6:46:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T06:46:01.069Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 13, 2022 6:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T06:46:26.465Z: Workers have started successfully.
    Feb 13, 2022 6:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T06:46:26.486Z: Workers have started successfully.
    Feb 13, 2022 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-13T06:46:57.147Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGIzSUhlb211d0MyYhoCamQaAmly/streams/CAMaAmpkGgJpciCWweLRBCgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGIzSUhlb211d0MyYhoCamQaAmly/streams/CAMaAmpkGgJpciCWweLRBCgC': offset 76120 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGIzSUhlb211d0MyYhoCamQaAmly/streams/CAMaAmpkGgJpciCWweLRBCgC': offset 76120 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 13, 2022 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-13T06:46:57.157Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGIzSUhlb211d0MyYhoCamQaAmly/streams/CAEaAmpkGgJpciC_hYDbBCgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGIzSUhlb211d0MyYhoCamQaAmly/streams/CAEaAmpkGgJpciC_hYDbBCgC': offset 74983 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGIzSUhlb211d0MyYhoCamQaAmly/streams/CAEaAmpkGgJpciC_hYDbBCgC': offset 74983 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 13, 2022 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-13T06:46:57.805Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGIzSUhlb211d0MyYhoCamQaAmly/streams/CAgaAmpkGgJpciC2ssHRBSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGIzSUhlb211d0MyYhoCamQaAmly/streams/CAgaAmpkGgJpciC2ssHRBSgC': offset 79913 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGIzSUhlb211d0MyYhoCamQaAmly/streams/CAgaAmpkGgJpciC2ssHRBSgC': offset 79913 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
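
Unlike the coder failures in the earlier runs, these three SEVERE entries are transient: the Storage Read API answered FAILED_PRECONDITION because the requested stream offset had not been committed on the server yet, the Dataflow service retried the affected work items, and the stage still completed (the job finishes with status DONE just below). At the API level the recovery amounts to re-issuing ReadRows from the last offset actually consumed. A sketch against the com.google.cloud.bigquery.storage.v1 client, with the stream name as a placeholder and the retry policy deliberately simplified (the worker above recovers by retrying the whole work item, not by resuming in place):

    import com.google.api.gax.rpc.ApiException;
    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    class ResumableRead {
      private static final int MAX_ATTEMPTS = 5;

      static void readAll(BigQueryReadClient client, String streamName) {
        long offset = 0;   // rows consumed so far; the resume point after a failure
        int attempts = 0;
        while (true) {
          ReadRowsRequest request = ReadRowsRequest.newBuilder()
              .setReadStream(streamName) // e.g. "projects/.../sessions/.../streams/..."
              .setOffset(offset)
              .build();
          try {
            for (ReadRowsResponse response : client.readRowsCallable().call(request)) {
              // ... decode response.getAvroRows() or getArrowRecordBatch() here ...
              offset += response.getRowCount(); // advance only past rows received
            }
            return; // stream exhausted
          } catch (ApiException e) {
            if (++attempts > MAX_ATTEMPTS) {
              throw e; // give up; in the log above the work item itself is retried instead
            }
            // treat as transient and re-issue ReadRows from `offset`
          }
        }
      }
    }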

    Feb 13, 2022 6:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T06:47:00.609Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 13, 2022 6:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T06:47:00.772Z: Cleaning up.
    Feb 13, 2022 6:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T06:47:00.886Z: Stopping worker pool...
    Feb 13, 2022 6:49:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T06:49:29.344Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 13, 2022 6:49:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T06:49:29.395Z: Worker pool stopped.
    Feb 13, 2022 6:49:35 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-12_22_45_03-2295573665848706507 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 3a74eb97-e48b-4bab-a9fa-d74de310a916 and timestamp: 2022-02-13T06:49:35.252000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.809

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 13, 2022 6:49:35 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 50.607 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org
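
For example, to reproduce the failure locally with the suggested diagnostics (assuming a Beam checkout with the standard Gradle wrapper):

  ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --stacktrace --scan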

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings
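
For example, to surface each deprecation warning with its origin (same task, same assumptions as above):

  ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --warning-mode all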

BUILD FAILED in 5m 16s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/2bu2uwwsrxv5c

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3025

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3025/display/redirect>

Changes:


------------------------------------------
[...truncated 362.37 KB...]
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-12_16_45_47-15766628897449760995
    Feb 13, 2022 12:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-13T00:45:49.555Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
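
    In other words, with --autoscalingAlgorithm=NONE the pool is pinned at --numWorkers and --maxNumWorkers has no effect; to let Dataflow scale up to the maximum instead, a job would pass something like:

        --autoscalingAlgorithm=THROUGHPUT_BASED --maxNumWorkers=5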
    Feb 13, 2022 12:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T00:45:58.071Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 13, 2022 12:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T00:45:58.650Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 13, 2022 12:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T00:45:58.688Z: Expanding GroupByKey operations into optimizable parts.
    Feb 13, 2022 12:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T00:45:58.721Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 13, 2022 12:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T00:45:58.874Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 13, 2022 12:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T00:45:58.921Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 13, 2022 12:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T00:45:59.003Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 13, 2022 12:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T00:45:59.357Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 13, 2022 12:46:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T00:45:59.430Z: Starting 5 workers in us-central1-f...
    Feb 13, 2022 12:46:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T00:46:05.745Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 13, 2022 12:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T00:46:47.371Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 13, 2022 12:47:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T00:47:13.538Z: Workers have started successfully.
    Feb 13, 2022 12:47:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T00:47:13.569Z: Workers have started successfully.
    Feb 13, 2022 12:47:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-13T00:47:42.213Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEZGMnp1cm1rdmw3RRoCamQaAmly/streams/CAMaAmpkGgJpciCi0rb2AigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEZGMnp1cm1rdmw3RRoCamQaAmly/streams/CAMaAmpkGgJpciCi0rb2AigC': offset 84787 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEZGMnp1cm1rdmw3RRoCamQaAmly/streams/CAMaAmpkGgJpciCi0rb2AigC': offset 84787 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 13, 2022 12:47:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-13T00:47:44.193Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEZGMnp1cm1rdmw3RRoCamQaAmly/streams/GgJqZBoCaXIg0JzPuAEoAg"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEZGMnp1cm1rdmw3RRoCamQaAmly/streams/GgJqZBoCaXIg0JzPuAEoAg': offset 88386 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEZGMnp1cm1rdmw3RRoCamQaAmly/streams/GgJqZBoCaXIg0JzPuAEoAg': offset 88386 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 13, 2022 12:47:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-13T00:47:44.374Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEZGMnp1cm1rdmw3RRoCamQaAmly/streams/CAQaAmpkGgJpciCZ84SeAigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEZGMnp1cm1rdmw3RRoCamQaAmly/streams/CAQaAmpkGgJpciCZ84SeAigC': offset 73834 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEZGMnp1cm1rdmw3RRoCamQaAmly/streams/CAQaAmpkGgJpciCZ84SeAigC': offset 73834 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 13, 2022 12:47:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T00:47:45.720Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 13, 2022 12:47:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T00:47:45.849Z: Cleaning up.
    Feb 13, 2022 12:47:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T00:47:45.919Z: Stopping worker pool...
    Feb 13, 2022 12:50:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T00:50:13.527Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 13, 2022 12:50:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-13T00:50:13.604Z: Worker pool stopped.
    Feb 13, 2022 12:50:19 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-12_16_45_47-15766628897449760995 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 504d14dc-15ff-4811-8363-671cd146f4f6 and timestamp: 2022-02-13T00:50:19.256000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.075

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 13, 2022 12:50:19 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 53.934 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 39s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/qanvgkv45v6gw

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3024

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3024/display/redirect?page=changes>

Changes:

[randomstep] [BEAM-9195] Bump org.testcontainers to 1.16.3


------------------------------------------
[...truncated 400.01 KB...]
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGlIbjZLdWZxN0NzehoCamQaAmly/streams/CAYaAmpkGgJpciCOmarDBigC': offset 72744 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 12, 2022 6:50:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-12T18:50:17.519Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGlIbjZLdWZxN0NzehoCamQaAmly/streams/CAIaAmpkGgJpciCakOzLAigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGlIbjZLdWZxN0NzehoCamQaAmly/streams/CAIaAmpkGgJpciCakOzLAigC': offset 87770 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGlIbjZLdWZxN0NzehoCamQaAmly/streams/CAIaAmpkGgJpciCakOzLAigC': offset 87770 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 12, 2022 6:50:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-12T18:50:17.606Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGlIbjZLdWZxN0NzehoCamQaAmly/streams/CAgaAmpkGgJpciCljoyqBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGlIbjZLdWZxN0NzehoCamQaAmly/streams/CAgaAmpkGgJpciCljoyqBygC': offset 68761 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGlIbjZLdWZxN0NzehoCamQaAmly/streams/CAgaAmpkGgJpciCljoyqBygC': offset 68761 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 12, 2022 6:50:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-12T18:50:17.607Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGlIbjZLdWZxN0NzehoCamQaAmly/streams/CAcaAmpkGgJpciCH2tzyAigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGlIbjZLdWZxN0NzehoCamQaAmly/streams/CAcaAmpkGgJpciCH2tzyAigC': offset 87356 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGlIbjZLdWZxN0NzehoCamQaAmly/streams/CAcaAmpkGgJpciCH2tzyAigC': offset 87356 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 12, 2022 6:50:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T18:50:21.580Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 12, 2022 6:50:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T18:50:21.776Z: Cleaning up.
    Feb 12, 2022 6:50:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T18:50:21.864Z: Stopping worker pool...
    Feb 12, 2022 6:52:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T18:52:49.458Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 12, 2022 6:52:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T18:52:49.505Z: Worker pool stopped.
    Feb 12, 2022 6:52:55 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-12_10_48_20-5988800567058707116 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 72b12845-96b0-4fca-a83e-f05cfcdb201c and timestamp: 2022-02-12T18:52:55.707000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.605

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 12, 2022 6:52:55 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 9 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 55.268 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 8m 36s
165 actionable tasks: 122 executed, 41 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/giijz62bdp6xi

Stopped 8 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3023

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3023/display/redirect>

Changes:


------------------------------------------
[...truncated 361.47 KB...]
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-12_04_45_16-18380373121823829936
    Feb 12, 2022 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-12T12:45:19.994Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 12, 2022 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T12:45:33.247Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 12, 2022 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T12:45:34.207Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 12, 2022 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T12:45:34.256Z: Expanding GroupByKey operations into optimizable parts.
    Feb 12, 2022 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T12:45:34.297Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 12, 2022 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T12:45:34.358Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 12, 2022 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T12:45:34.388Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 12, 2022 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T12:45:34.417Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 12, 2022 12:45:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T12:45:34.786Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 12, 2022 12:45:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T12:45:34.886Z: Starting 5 workers in us-central1-f...
    Feb 12, 2022 12:46:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T12:46:00.699Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 12, 2022 12:46:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T12:46:20.237Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 12, 2022 12:46:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T12:46:44.933Z: Workers have started successfully.
    Feb 12, 2022 12:46:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T12:46:44.964Z: Workers have started successfully.
    Feb 12, 2022 12:47:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-12T12:47:13.650Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFI2ZDBIUXNwaS1GVhoCamQaAmly/streams/CAEaAmpkGgJpciDmuM-NAigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFI2ZDBIUXNwaS1GVhoCamQaAmly/streams/CAEaAmpkGgJpciDmuM-NAigC': offset 110368 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFI2ZDBIUXNwaS1GVhoCamQaAmly/streams/CAEaAmpkGgJpciDmuM-NAigC': offset 110368 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 12, 2022 12:47:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-12T12:47:13.888Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFI2ZDBIUXNwaS1GVhoCamQaAmly/streams/CAQaAmpkGgJpciCK5Lr7BCgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFI2ZDBIUXNwaS1GVhoCamQaAmly/streams/CAQaAmpkGgJpciCK5Lr7BCgC': offset 121334 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFI2ZDBIUXNwaS1GVhoCamQaAmly/streams/CAQaAmpkGgJpciCK5Lr7BCgC': offset 121334 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 12, 2022 12:47:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-12T12:47:14.897Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFI2ZDBIUXNwaS1GVhoCamQaAmly/streams/GgJqZBoCaXIg7pDGyAUoAg"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFI2ZDBIUXNwaS1GVhoCamQaAmly/streams/GgJqZBoCaXIg7pDGyAUoAg': offset 112290 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFI2ZDBIUXNwaS1GVhoCamQaAmly/streams/GgJqZBoCaXIg7pDGyAUoAg': offset 112290 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
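
    The repeated SEVERE entries above all fail the same way: the BigQuery Storage Read API rejects a ReadRows call with FAILED_PRECONDITION because the requested offset has not yet been produced on that stream. A minimal sketch of how a client might resume such a stream from the last consumed offset is below; it assumes the com.google.cloud.bigquery.storage.v1 client, and the method name, retry cap, and backoff are illustrative only, not the Beam worker's actual recovery path.

    import com.google.api.gax.rpc.FailedPreconditionException;
    import com.google.api.gax.rpc.ServerStream;
    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    public class StreamResumeSketch {
      // Drain one read stream, resuming from the last consumed offset when the
      // server reports FAILED_PRECONDITION ("offset ... has not been allocated yet").
      static void readStreamWithResume(BigQueryReadClient client, String streamName)
          throws InterruptedException {
        long offset = 0;
        int attempts = 0;
        while (true) {
          ReadRowsRequest request =
              ReadRowsRequest.newBuilder().setReadStream(streamName).setOffset(offset).build();
          try {
            ServerStream<ReadRowsResponse> stream = client.readRowsCallable().call(request);
            for (ReadRowsResponse response : stream) {
              offset += response.getRowCount(); // advance past rows already consumed
            }
            return; // stream fully drained
          } catch (FailedPreconditionException e) {
            if (++attempts > 5) {
              throw e; // illustrative retry cap
            }
            Thread.sleep(1000L * attempts); // crude linear backoff, for illustration
          }
        }
      }
    }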

    Feb 12, 2022 12:47:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T12:47:17.428Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 12, 2022 12:47:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T12:47:17.594Z: Cleaning up.
    Feb 12, 2022 12:47:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T12:47:17.659Z: Stopping worker pool...
    Feb 12, 2022 12:49:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T12:49:43.276Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 12, 2022 12:49:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T12:49:43.312Z: Worker pool stopped.
    Feb 12, 2022 12:49:48 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-12_04_45_16-18380373121823829936 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d5010705-7288-49b2-8f9e-1e7051cd33c4 and timestamp: 2022-02-12T12:49:48.560000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.923

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 12, 2022 12:49:48 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 51.379 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 16s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/jll2sesgwg25w

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3022

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3022/display/redirect?page=changes>

Changes:

[noreply] Minor: Fix link to nexmark benchmarks (#16803)

[noreply] Regenerate python container base_image_requirements.txt (#16832)


------------------------------------------
[...truncated 367.48 KB...]
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDC1KVi0zWmNLRll6ZhoCamQaAmly/streams/CAEaAmpkGgJpciDTvaD_BCgC': offset 100800 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 12, 2022 6:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-12T06:47:01.472Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDC1KVi0zWmNLRll6ZhoCamQaAmly/streams/CAMaAmpkGgJpciDKyoinBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDC1KVi0zWmNLRll6ZhoCamQaAmly/streams/CAMaAmpkGgJpciDKyoinBygC': offset 89803 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDC1KVi0zWmNLRll6ZhoCamQaAmly/streams/CAMaAmpkGgJpciDKyoinBygC': offset 89803 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 12, 2022 6:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-12T06:47:02.475Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDC1KVi0zWmNLRll6ZhoCamQaAmly/streams/CAkaAmpkGgJpciDo75GgAigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDC1KVi0zWmNLRll6ZhoCamQaAmly/streams/CAkaAmpkGgJpciDo75GgAigC': offset 83735 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDC1KVi0zWmNLRll6ZhoCamQaAmly/streams/CAkaAmpkGgJpciDo75GgAigC': offset 83735 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 12, 2022 6:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-12T06:47:02.716Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDC1KVi0zWmNLRll6ZhoCamQaAmly/streams/CAUaAmpkGgJpciDDuJTOBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDC1KVi0zWmNLRll6ZhoCamQaAmly/streams/CAUaAmpkGgJpciDDuJTOBygC': offset 93634 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDC1KVi0zWmNLRll6ZhoCamQaAmly/streams/CAUaAmpkGgJpciDDuJTOBygC': offset 93634 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 12, 2022 6:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T06:47:05.324Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 12, 2022 6:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T06:47:05.654Z: Cleaning up.
    Feb 12, 2022 6:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T06:47:05.819Z: Stopping worker pool...
    Feb 12, 2022 6:49:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T06:49:36.234Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 12, 2022 6:49:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-12T06:49:36.321Z: Worker pool stopped.
    Feb 12, 2022 6:49:44 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-11_22_45_07-1772471740197724607 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8c958af0-d0d5-412e-945b-84ba243e3f89 and timestamp: 2022-02-12T06:49:44.048000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.173

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 12, 2022 6:49:44 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 55.748 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 22s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/4tgk3nx6cnyz6

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3021

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3021/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13834] Increase influxDB persistent storage. (#16817)


------------------------------------------
[...truncated 348.53 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 99eac2901efbbfb05d323a3463eef577
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 11, 2022 7:28:59 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 11, 2022 7:28:59 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 11, 2022 7:29:00 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 11, 2022 7:29:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 11, 2022 7:29:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 11, 2022 7:29:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 11, 2022 7:29:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 11, 2022 7:29:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 11, 2022 7:29:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 11, 2022 7:29:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
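
    The root causes listed in this IllegalStateException point at the fix: a ParDo that emits Beam Row values cannot have its coder inferred, so the schema must be attached explicitly. A minimal sketch of the remedy the message itself suggests, using PCollection.setRowSchema; the field names follow the query's projected columns, while the field types and the surrounding method are assumptions.

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      // Attach an explicit row schema so the PCollection<Row> gets a coder;
      // equivalent to setting RowCoder.of(schema) via setCoder().
      static PCollection<Row> withExplicitRowSchema(PCollection<Row> rows) {
        Schema schema =
            Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score") // assumed numeric type
                .build();
        return rows.setRowSchema(schema);
      }
    }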

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 11, 2022 7:29:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 11, 2022 7:29:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 11, 2022 7:29:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 11, 2022 7:29:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 11, 2022 7:29:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 11, 2022 7:29:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 11, 2022 7:29:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@635097653]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 11, 2022 7:29:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 11, 2022 7:29:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 11, 2022 7:29:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 11, 2022 7:29:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 11, 2022 7:29:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 11, 2022 7:29:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 11, 2022 7:29:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 11, 2022 7:29:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
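
For contrast with the BeamCalcRel plans earlier in this log, where the
projection and filter run as a separate Calc stage inside the pipeline, the
push-down plan above folds both into the IO source. At the BigQueryIO level
this is roughly equivalent to the following sketch; the public Hacker News
table name is an illustrative stand-in, not the table the IT actually reads:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;

    public class PushDownEquivalent {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        p.apply("ReadWithPushDown",
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // illustrative table
                .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
                // Only the four referenced columns leave BigQuery...
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // ...and the supported predicate is evaluated server-side,
                // matching the "Pushing down the following filter" line above.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }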
    Feb 11, 2022 7:29:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 11, 2022 7:29:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 11, 2022 7:29:11 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-aiNHdtykAIX80LKys347oq_viw_HTVuLfbzd72BjsUc.jar
    Feb 11, 2022 7:29:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4948337523126738026.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-aRevFNVF1MrOu1HYaHK4ECIAzb5FP9rgynJldkck_7Y.jar
    Feb 11, 2022 7:29:12 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 file newly uploaded in 0 seconds
    Feb 11, 2022 7:29:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 11, 2022 7:29:12 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143949 bytes, hash f108ad4e9f1fbda6bbe2b2c77380e5ad6caa3cf7fd0213da2f563051ca3067e5> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-8QitTp8fvaa74rLHc4DlrWyqPPf9AhPaL1YwUcowZ-U.pb
    Feb 11, 2022 7:29:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 11, 2022 7:29:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 11, 2022 7:29:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 11, 2022 7:29:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 11, 2022 7:29:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-11_11_29_15-8765043542014101736?project=apache-beam-testing
    Feb 11, 2022 7:29:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-11_11_29_15-8765043542014101736
    Feb 11, 2022 7:29:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-11_11_29_15-8765043542014101736
    Feb 11, 2022 7:29:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-11T19:29:19.020Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 11, 2022 7:29:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T19:29:26.123Z: Worker configuration: e2-standard-2 in us-central1-c.
    Feb 11, 2022 7:29:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T19:29:26.885Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 11, 2022 7:29:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T19:29:26.927Z: Expanding GroupByKey operations into optimizable parts.
    Feb 11, 2022 7:29:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T19:29:26.955Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 11, 2022 7:29:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T19:29:27.018Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 11, 2022 7:29:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T19:29:27.035Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 11, 2022 7:29:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T19:29:27.060Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 11, 2022 7:29:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T19:29:27.353Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 11, 2022 7:29:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T19:29:27.442Z: Starting 5 workers in us-central1-c...
    Feb 11, 2022 7:29:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T19:29:45.671Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 11, 2022 7:30:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T19:30:11.289Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 11, 2022 7:30:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T19:30:34.543Z: Workers have started successfully.
    Feb 11, 2022 7:30:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T19:30:34.572Z: Workers have started successfully.
    Feb 11, 2022 7:31:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T19:31:06.415Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 11, 2022 7:31:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T19:31:06.572Z: Cleaning up.
    Feb 11, 2022 7:31:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T19:31:06.656Z: Stopping worker pool...
    Feb 11, 2022 7:33:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T19:33:30.135Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 11, 2022 7:33:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T19:33:30.231Z: Worker pool stopped.
    Feb 11, 2022 7:33:37 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-11_11_29_15-8765043542014101736 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): b77d738f-78d5-4fcb-9ff9-d808ea44c92f and timestamp: 2022-02-11T19:33:37.334000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.107

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 11, 2022 7:33:37 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
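
This warning is benign for the test run itself: the InfluxDB
measurement/database properties were not supplied to the harness, so the
metrics printed above are collected but not exported to InfluxDB.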

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 42.236 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 33s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/tczjjhy54k4ze

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3020

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3020/display/redirect?page=changes>

Changes:

[Carl Yeksigian] Cache bucket matcher regex in GcsPath

[noreply] [BEAM-4032]Support staging binary distributions of dependency packages


------------------------------------------
[...truncated 423.13 KB...]
    INFO: 2022-02-11T18:15:08.166Z: Worker configuration: e2-standard-2 in us-central1-c.
    Feb 11, 2022 6:15:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T18:15:09.017Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 11, 2022 6:15:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T18:15:09.047Z: Expanding GroupByKey operations into optimizable parts.
    Feb 11, 2022 6:15:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T18:15:09.089Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 11, 2022 6:15:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T18:15:09.149Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 11, 2022 6:15:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T18:15:09.229Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 11, 2022 6:15:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T18:15:09.264Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 11, 2022 6:15:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T18:15:09.591Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 11, 2022 6:15:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T18:15:09.668Z: Starting 5 workers in us-central1-c...
    Feb 11, 2022 6:15:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T18:15:22.166Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 11, 2022 6:15:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T18:15:54.252Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 11, 2022 6:16:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T18:16:19.820Z: Workers have started successfully.
    Feb 11, 2022 6:16:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T18:16:19.856Z: Workers have started successfully.
    Feb 11, 2022 6:16:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-11T18:16:50.294Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHNNX0hOLTlZbkdtVxoCamQaAmly/streams/CAUaAmpkGgJpciCdv47RAigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHNNX0hOLTlZbkdtVxoCamQaAmly/streams/CAUaAmpkGgJpciCdv47RAigC': offset 109722 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHNNX0hOLTlZbkdtVxoCamQaAmly/streams/CAUaAmpkGgJpciCdv47RAigC': offset 109722 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
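
The FAILED_PRECONDITION above, "offset 109722 has not been allocated yet", is
most plausibly read as the Storage Read API reader resuming a stream at an
offset the server had not yet committed, a transient server-side condition;
consistent with that reading, the work is retried and this job still finishes
with status DONE further down.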

    Feb 11, 2022 6:16:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-11T18:16:51.368Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHNNX0hOLTlZbkdtVxoCamQaAmly/streams/CAgaAmpkGgJpciDFl7yiAigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHNNX0hOLTlZbkdtVxoCamQaAmly/streams/CAgaAmpkGgJpciDFl7yiAigC': offset 127082 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHNNX0hOLTlZbkdtVxoCamQaAmly/streams/CAgaAmpkGgJpciDFl7yiAigC': offset 127082 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 11, 2022 6:16:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-11T18:16:51.369Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHNNX0hOLTlZbkdtVxoCamQaAmly/streams/CAcaAmpkGgJpciCVw8N3KAI"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHNNX0hOLTlZbkdtVxoCamQaAmly/streams/CAcaAmpkGgJpciCVw8N3KAI': offset 116053 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHNNX0hOLTlZbkdtVxoCamQaAmly/streams/CAcaAmpkGgJpciCVw8N3KAI': offset 116053 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 11, 2022 6:16:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T18:16:55.227Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 11, 2022 6:16:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T18:16:55.372Z: Cleaning up.
    Feb 11, 2022 6:16:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T18:16:55.446Z: Stopping worker pool...
    Feb 11, 2022 6:19:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T18:19:18.874Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 11, 2022 6:19:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T18:19:18.915Z: Worker pool stopped.
    Feb 11, 2022 6:19:24 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-11_10_14_57-16615540121006598750 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 47c868be-44a6-49ae-aafd-151f7f67ace5 and timestamp: 2022-02-11T18:19:25.004000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    11.161

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 11, 2022 6:19:25 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 49.889 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 28s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/hzdi5lxhtbmwk

Build cache (/home/jenkins/.gradle/caches/build-cache-1) removing files not accessed on or after Fri Feb 04 18:13:05 UTC 2022.
Invalidating in-memory cache of /home/jenkins/.gradle/caches/journal-1/file-access.bin
Build cache (/home/jenkins/.gradle/caches/build-cache-1) cleanup deleted 3293 files/directories.
Build cache (/home/jenkins/.gradle/caches/build-cache-1) cleaned up in 14.427 secs.
Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3019

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3019/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13908] [Coverage] Better testing coverage for gcpopts (#16816)

[noreply] Merge pull request #16809 from [BEAM-12164] Added integration test for


------------------------------------------
[...truncated 353.77 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 55a8bb56ab0abeec486ad9ed65966270
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 11, 2022 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 11, 2022 6:45:13 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 11, 2022 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 11, 2022 6:45:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 11, 2022 6:45:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 11, 2022 6:45:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 11, 2022 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 11, 2022 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 11, 2022 6:45:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 11, 2022 6:45:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 11, 2022 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 11, 2022 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 11, 2022 6:45:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 11, 2022 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 11, 2022 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 11, 2022 6:45:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 11, 2022 6:45:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@848490487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 11, 2022 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 11, 2022 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 11, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 11, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 11, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 11, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 11, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 11, 2022 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 11, 2022 6:45:20 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 11, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 11, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT-aiNHdtykAIX80LKys347oq_viw_HTVuLfbzd72BjsUc.jar
    Feb 11, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test713604900503593021.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-mYn0CgndZpDtX6gZeZvQ8Xd5S-acdVvRLGtvOGXkUww.jar
    Feb 11, 2022 6:45:26 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 file newly uploaded in 0 seconds
    Feb 11, 2022 6:45:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 11, 2022 6:45:26 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143948 bytes, hash 8633419c982498f0497eae9cace3db04d0df01e9d5e8cc9d7d9592409726a822> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-hjNBnJgkmPBJfq6crOPbBNDfAenV6MydfZWSQJcmqCI.pb
    Feb 11, 2022 6:45:29 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 11, 2022 6:45:29 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 11, 2022 6:45:29 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 11, 2022 6:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 11, 2022 6:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-10_22_45_29-14711654545166510142?project=apache-beam-testing
    Feb 11, 2022 6:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-10_22_45_29-14711654545166510142
    Feb 11, 2022 6:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-10_22_45_29-14711654545166510142
    Feb 11, 2022 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-11T06:45:33.113Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 11, 2022 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T06:45:40.955Z: Worker configuration: e2-standard-2 in us-central1-c.
    Feb 11, 2022 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T06:45:41.652Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 11, 2022 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T06:45:41.726Z: Expanding GroupByKey operations into optimizable parts.
    Feb 11, 2022 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T06:45:41.761Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 11, 2022 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T06:45:41.834Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 11, 2022 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T06:45:41.860Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 11, 2022 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T06:45:41.883Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 11, 2022 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T06:45:42.204Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 11, 2022 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T06:45:42.308Z: Starting 5 workers in us-central1-c...
    Feb 11, 2022 6:45:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T06:45:47.352Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 11, 2022 6:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T06:46:26.074Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 11, 2022 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T06:46:53.950Z: Workers have started successfully.
    Feb 11, 2022 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T06:46:53.978Z: Workers have started successfully.
    Feb 11, 2022 6:47:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T06:47:21.458Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 11, 2022 6:47:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T06:47:21.630Z: Cleaning up.
    Feb 11, 2022 6:47:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T06:47:21.712Z: Stopping worker pool...
    Feb 11, 2022 6:49:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T06:49:50.345Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 11, 2022 6:49:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T06:49:50.410Z: Worker pool stopped.
    Feb 11, 2022 6:49:55 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-10_22_45_29-14711654545166510142 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): ea697c5b-7692-4f61-bd44-1db6e8e39554 and timestamp: 2022-02-11T06:49:55.850000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.171
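
The fields_read and read_time rows above are emitted by the monitoring transforms visible in the fused stage names (ParDo(RowMonitor), ParDo(TimeMonitor)). A minimal sketch of how such a counter is typically recorded with Beam's Metrics API; this is not the actual RowMonitor code, and the "beam_performance" namespace is only an assumption mirroring the options in this job:

    import org.apache.beam.sdk.metrics.Counter;
    import org.apache.beam.sdk.metrics.Metrics;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.values.Row;

    // Counts every field that flows through, in the style of the fields_read metric.
    class FieldCountingFn extends DoFn<Row, Row> {
      private final Counter fieldsRead = Metrics.counter("beam_performance", "fields_read");

      @ProcessElement
      public void processElement(@Element Row row, OutputReceiver<Row> out) {
        fieldsRead.inc(row.getFieldCount()); // one increment per field in the row
        out.output(row);
      }
    }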

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 11, 2022 6:49:55 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
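
For context on the warning: InfluxDBPublisher found no measurement/database configured, so the load-test metrics above were printed but not persisted to InfluxDB. In other Beam test suites these values are passed as pipeline options; assuming the same option names apply to this job (an assumption, not confirmed by this log, and the values below are placeholders), the test invocation would add entries such as:

    "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch","--influxHost=http://localhost:8086"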

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 46.618 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 37s
165 actionable tasks: 107 executed, 56 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/tfg6yy54u2rky

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3018

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3018/display/redirect?page=changes>

Changes:

[benjamin.gonzalez] [BEAM-12672] Retry flaky tests

[benjamin.gonzalez] [BEAM-12672] Fix spotlessApply

[laraschmidt] Fixing the log line to properly handle a large allowed skew.

[noreply] [BEAM-12000] Fix typo in portable Python job definition (#16812)

[noreply] [BEAM-12164]: Fixes SpannerChangeStreamIT (#16806)

[noreply] [BEAM-12572] Fix failures in python examples tests (#16781)

[noreply] [BEAM-13921] filter out debeziumIO test for spark runner (#16815)

[noreply] [BEAM-13855] Skip SpannerChangeStreamOrderedWithinKeyIT and

[noreply] [BEAM-13679] playground - move quick start category to the top (#16808)

[noreply] Update license_script.sh (#16789)


------------------------------------------
[...truncated 370.90 KB...]
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEIwbHozUFZYUUtVZBoCamQaAmly/streams/CAEaAmpkGgJpciCg3-LYAygC': offset 100415 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 11, 2022 12:55:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-11T00:55:20.711Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEIwbHozUFZYUUtVZBoCamQaAmly/streams/GgJqZBoCaXIgsv6m1gIoAg"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEIwbHozUFZYUUtVZBoCamQaAmly/streams/GgJqZBoCaXIgsv6m1gIoAg': offset 108336 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEIwbHozUFZYUUtVZBoCamQaAmly/streams/GgJqZBoCaXIgsv6m1gIoAg': offset 108336 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 11, 2022 12:55:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-11T00:55:21.260Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEIwbHozUFZYUUtVZBoCamQaAmly/streams/CAcaAmpkGgJpciD6w4XRBSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEIwbHozUFZYUUtVZBoCamQaAmly/streams/CAcaAmpkGgJpciD6w4XRBSgC': offset 80342 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEIwbHozUFZYUUtVZBoCamQaAmly/streams/CAcaAmpkGgJpciD6w4XRBSgC': offset 80342 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 11, 2022 12:55:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-11T00:55:21.264Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEIwbHozUFZYUUtVZBoCamQaAmly/streams/CAUaAmpkGgJpciDso-LCAygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEIwbHozUFZYUUtVZBoCamQaAmly/streams/CAUaAmpkGgJpciDso-LCAygC': offset 75179 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEIwbHozUFZYUUtVZBoCamQaAmly/streams/CAUaAmpkGgJpciDso-LCAygC': offset 75179 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
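
All three SEVERE traces above fail the same way: the worker retried ReadRows at an offset the Storage Read API server had not yet allocated for that stream, which surfaces as FAILED_PRECONDITION. The sketch below shows the offset bookkeeping a resuming reader must get right with the google-cloud-bigquerystorage v1 client: advance the offset only by rows actually delivered, and pass that count when re-opening the stream. streamName is a placeholder for one of the session streams named in the log; this illustrates the API contract, not Beam's reader implementation:

    import com.google.api.gax.rpc.ServerStream;
    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    public class ResumableStreamRead {
      public static void main(String[] args) throws Exception {
        String streamName = args[0]; // placeholder: projects/.../sessions/.../streams/...
        long consumedOffset = 0;     // rows successfully consumed so far
        try (BigQueryReadClient client = BigQueryReadClient.create()) {
          ReadRowsRequest request =
              ReadRowsRequest.newBuilder()
                  .setReadStream(streamName)
                  // Asking for an offset past the server's high-water mark for the
                  // stream is what produces "offset N has not been allocated yet".
                  .setOffset(consumedOffset)
                  .build();
          ServerStream<ReadRowsResponse> stream = client.readRowsCallable().call(request);
          for (ReadRowsResponse response : stream) {
            consumedOffset += response.getRowCount(); // advance only by delivered rows
          }
        }
      }
    }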

    Feb 11, 2022 12:55:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T00:55:26.081Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 11, 2022 12:55:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T00:55:26.222Z: Cleaning up.
    Feb 11, 2022 12:55:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T00:55:26.296Z: Stopping worker pool...
    Feb 11, 2022 12:57:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T00:57:56.718Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 11, 2022 12:57:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-11T00:57:56.998Z: Worker pool stopped.
    Feb 11, 2022 12:58:19 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-10_16_53_20-17733241814475739666 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6bb0c1f9-ba5f-43c3-9c04-417e589ec6b8 and timestamp: 2022-02-11T00:58:19.188000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    13.057

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 11, 2022 12:58:19 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 5 mins 19.924 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 10s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/haq7ccwatpacg

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3017

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3017/display/redirect?page=changes>

Changes:

[n] BEAM-13159 Update embedded-redis dependency

[n] address comments

[noreply] Minor: Add 2.38.0 section to CHANGES.md (#16804)


------------------------------------------
Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on apache-beam-jenkins-2 (beam) in workspace <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/>
The recommended git tool is: NONE
No credentials specified
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository https://github.com/apache/beam.git
 > git init <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src> # timeout=10
Fetching upstream changes from https://github.com/apache/beam.git
 > git --version # timeout=10
 > git --version # 'git version 2.25.1'
 > git fetch --tags --force --progress -- https://github.com/apache/beam.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/apache/beam.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/apache/beam.git # timeout=10
Fetching upstream changes from https://github.com/apache/beam.git
 > git fetch --tags --force --progress -- https://github.com/apache/beam.git +refs/heads/*:refs/remotes/origin/* +refs/pull/${ghprbPullId}/*:refs/remotes/origin/pr/${ghprbPullId}/* # timeout=10
 > git rev-parse origin/master^{commit} # timeout=10
Checking out Revision 0e057fd71f96a12d74307799adabf63da9003d11 (origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 0e057fd71f96a12d74307799adabf63da9003d11 # timeout=10
Commit message: "Merge pull request #16798: [BEAM-13159] Update embedded-redis dependency"
 > git rev-list --no-walk 8b213c617ef8cf3a077bb0002b6b0fec8e85cb05 # timeout=10
No emails were triggered.
[EnvInject] - Executing scripts and injecting environment variables after the SCM step.
[EnvInject] - Injecting as environment variables the properties content 
SPARK_LOCAL_IP=127.0.0.1
SETUPTOOLS_USE_DISTUTILS=stdlib

[EnvInject] - Variables injected successfully.
[Gradle] - Launching build.
[src] $ <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/gradlew> --continue --max-workers=12 -Dorg.gradle.jvmargs=-Xms2g -Dorg.gradle.jvmargs=-Xmx4g -Dorg.gradle.vfs.watch=false -Pdocker-pull-licenses --info -DintegrationTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE"] -DintegrationTestRunner=dataflow :sdks:java:extensions:sql:perf-tests:integrationTest --tests org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT
Initialized native services in: /home/jenkins/.gradle/native
Initialized jansi services in: /home/jenkins/.gradle/native
The client will now receive all logging from the daemon (pid: 752286). The daemon log file: /home/jenkins/.gradle/daemon/7.3.2/daemon-752286.out.log
Starting 3rd build in daemon [uptime: 14 mins 10.832 secs, performance: 98%, GC rate: 0.02/s, heap usage: 1% of 2.6 GiB]
Closing daemon's stdin at end of input.
The daemon will no longer process any standard input.
Using 12 worker leases.
Configuration on demand is an incubating feature.
Watching the file system is configured to be disabled
File system watching is inactive
Starting Build
Invalidating in-memory cache of /home/jenkins/.gradle/caches/journal-1/file-access.bin
Invalidating in-memory cache of /home/jenkins/.gradle/caches/7.3.2/fileHashes/fileHashes.bin
Invalidating in-memory cache of /home/jenkins/.gradle/caches/7.3.2/fileHashes/resourceHashesCache.bin
Settings evaluated using settings file '<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/settings.gradle.kts>'.
Using local directory build cache for the root build (location = /home/jenkins/.gradle/caches/build-cache-1, removeUnusedEntriesAfter = 7 days).
Projects loaded. Root project using build file '<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/build.gradle.kts>'.
Included projects: [root project 'beam', project ':beam-test-infra-metrics', project ':beam-test-jenkins', project ':beam-test-tools', project ':beam-validate-runner', project ':examples', project ':model', project ':playground', project ':release', project ':runners', project ':sdks', project ':vendor', project ':website', project ':examples:java', project ':examples:kotlin', project ':examples:multi-language', project ':model:fn-execution', project ':model:job-management', project ':model:pipeline', project ':playground:backend', project ':playground:frontend', project ':release:go-licenses', project ':runners:core-construction-java', project ':runners:core-java', project ':runners:direct-java', project ':runners:extensions-java', project ':runners:flink', project ':runners:google-cloud-dataflow-java', project ':runners:java-fn-execution', project ':runners:java-job-service', project ':runners:jet', project ':runners:local-java', project ':runners:portability', project ':runners:samza', project ':runners:spark', project ':runners:twister2', project ':sdks:go', project ':sdks:java', project ':sdks:python', project ':vendor:bytebuddy-1_11_0', project ':vendor:calcite-1_28_0', project ':vendor:grpc-1_43_2', project ':vendor:guava-26_0-jre', project ':examples:java:twitter', project ':playground:backend:containers', project ':release:go-licenses:go', project ':release:go-licenses:java', project ':release:go-licenses:py', project ':runners:extensions-java:metrics', project ':runners:flink:1.11', project ':runners:flink:1.12', project ':runners:flink:1.13', project ':runners:google-cloud-dataflow-java:examples', project ':runners:google-cloud-dataflow-java:examples-streaming', project ':runners:google-cloud-dataflow-java:worker', project ':runners:portability:java', project ':runners:samza:job-server', project ':runners:spark:2', project ':runners:spark:3', project ':sdks:go:container', project ':sdks:go:examples', project ':sdks:go:test', project ':sdks:java:bom', project ':sdks:java:build-tools', project ':sdks:java:container', project ':sdks:java:core', project ':sdks:java:expansion-service', project ':sdks:java:extensions', project ':sdks:java:fn-execution', project ':sdks:java:harness', project ':sdks:java:io', project ':sdks:java:javadoc', project ':sdks:java:maven-archetypes', project ':sdks:java:testing', project ':sdks:python:apache_beam', project ':sdks:python:container', project ':sdks:python:test-suites', project ':playground:backend:containers:go', project ':playground:backend:containers:java', project ':playground:backend:containers:python', project ':playground:backend:containers:router', project ':playground:backend:containers:scio', project ':runners:flink:1.11:job-server', project ':runners:flink:1.11:job-server-container', project ':runners:flink:1.12:job-server', project ':runners:flink:1.12:job-server-container', project ':runners:flink:1.13:job-server', project ':runners:flink:1.13:job-server-container', project ':runners:google-cloud-dataflow-java:worker:legacy-worker', project ':runners:google-cloud-dataflow-java:worker:windmill', project ':runners:spark:2:job-server', project ':runners:spark:3:job-server', project ':sdks:go:test:load', project ':sdks:java:bom:gcp', project ':sdks:java:container:java11', project ':sdks:java:container:java17', project ':sdks:java:container:java8', project ':sdks:java:expansion-service:app', project ':sdks:java:extensions:arrow', project ':sdks:java:extensions:euphoria', project ':sdks:java:extensions:google-cloud-platform-core', project 
':sdks:java:extensions:jackson', project ':sdks:java:extensions:join-library', project ':sdks:java:extensions:kryo', project ':sdks:java:extensions:ml', project ':sdks:java:extensions:protobuf', project ':sdks:java:extensions:sbe', project ':sdks:java:extensions:schemaio-expansion-service', project ':sdks:java:extensions:sketching', project ':sdks:java:extensions:sorter', project ':sdks:java:extensions:sql', project ':sdks:java:extensions:zetasketch', project ':sdks:java:harness:jmh', project ':sdks:java:io:amazon-web-services', project ':sdks:java:io:amazon-web-services2', project ':sdks:java:io:amqp', project ':sdks:java:io:azure', project ':sdks:java:io:bigquery-io-perf-tests', project ':sdks:java:io:cassandra', project ':sdks:java:io:clickhouse', project ':sdks:java:io:common', project ':sdks:java:io:contextualtextio', project ':sdks:java:io:debezium', project ':sdks:java:io:elasticsearch', project ':sdks:java:io:elasticsearch-tests', project ':sdks:java:io:expansion-service', project ':sdks:java:io:file-based-io-tests', project ':sdks:java:io:google-cloud-platform', project ':sdks:java:io:hadoop-common', project ':sdks:java:io:hadoop-file-system', project ':sdks:java:io:hadoop-format', project ':sdks:java:io:hbase', project ':sdks:java:io:hcatalog', project ':sdks:java:io:influxdb', project ':sdks:java:io:jdbc', project ':sdks:java:io:jms', project ':sdks:java:io:kafka', project ':sdks:java:io:kinesis', project ':sdks:java:io:kudu', project ':sdks:java:io:mongodb', project ':sdks:java:io:mqtt', project ':sdks:java:io:parquet', project ':sdks:java:io:rabbitmq', project ':sdks:java:io:redis', project ':sdks:java:io:snowflake', project ':sdks:java:io:solr', project ':sdks:java:io:splunk', project ':sdks:java:io:synthetic', project ':sdks:java:io:thrift', project ':sdks:java:io:tika', project ':sdks:java:io:xml', project ':sdks:java:maven-archetypes:examples', project ':sdks:java:maven-archetypes:gcp-bom-examples', project ':sdks:java:maven-archetypes:starter', project ':sdks:java:testing:expansion-service', project ':sdks:java:testing:jpms-tests', project ':sdks:java:testing:kafka-service', project ':sdks:java:testing:load-tests', project ':sdks:java:testing:nexmark', project ':sdks:java:testing:test-utils', project ':sdks:java:testing:tpcds', project ':sdks:java:testing:watermarks', project ':sdks:python:apache_beam:testing', project ':sdks:python:container:py36', project ':sdks:python:container:py37', project ':sdks:python:container:py38', project ':sdks:python:container:py39', project ':sdks:python:test-suites:dataflow', project ':sdks:python:test-suites:direct', project ':sdks:python:test-suites:portable', project ':sdks:python:test-suites:tox', project ':runners:spark:2:job-server:container', project ':runners:spark:3:job-server:container', project ':sdks:java:extensions:sql:datacatalog', project ':sdks:java:extensions:sql:expansion-service', project ':sdks:java:extensions:sql:hcatalog', project ':sdks:java:extensions:sql:jdbc', project ':sdks:java:extensions:sql:payloads', project ':sdks:java:extensions:sql:perf-tests', project ':sdks:java:extensions:sql:shell', project ':sdks:java:extensions:sql:udf', project ':sdks:java:extensions:sql:udf-test-provider', project ':sdks:java:extensions:sql:zetasql', project ':sdks:java:io:debezium:expansion-service', project ':sdks:java:io:elasticsearch-tests:elasticsearch-tests-2', project ':sdks:java:io:elasticsearch-tests:elasticsearch-tests-5', project ':sdks:java:io:elasticsearch-tests:elasticsearch-tests-6', project 
':sdks:java:io:elasticsearch-tests:elasticsearch-tests-7', project ':sdks:java:io:elasticsearch-tests:elasticsearch-tests-common', project ':sdks:java:io:google-cloud-platform:expansion-service', project ':sdks:java:io:kinesis:expansion-service', project ':sdks:java:io:snowflake:expansion-service', project ':sdks:python:apache_beam:testing:load_tests', project ':sdks:python:test-suites:dataflow:py36', project ':sdks:python:test-suites:dataflow:py37', project ':sdks:python:test-suites:dataflow:py38', project ':sdks:python:test-suites:dataflow:py39', project ':sdks:python:test-suites:direct:py36', project ':sdks:python:test-suites:direct:py37', project ':sdks:python:test-suites:direct:py38', project ':sdks:python:test-suites:direct:py39', project ':sdks:python:test-suites:direct:xlang', project ':sdks:python:test-suites:portable:py36', project ':sdks:python:test-suites:portable:py37', project ':sdks:python:test-suites:portable:py38', project ':sdks:python:test-suites:portable:py39', project ':sdks:python:test-suites:tox:py36', project ':sdks:python:test-suites:tox:py37', project ':sdks:python:test-suites:tox:py38', project ':sdks:python:test-suites:tox:py39', project ':sdks:python:test-suites:tox:pycommon']

> Configure project :buildSrc
Evaluating project ':buildSrc' using build file '<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/buildSrc/build.gradle.kts>'.
Build cache key for Kotlin DSL accessors for project ':buildSrc' is 4da4b0723b7c8d030b96881b71ab6f8b
Skipping Kotlin DSL accessors for project ':buildSrc' as it is up-to-date.
Selected primary task 'build' from project :
:buildSrc:compileJava (Thread[Execution worker for ':buildSrc',5,main]) started.

> Task :buildSrc:compileJava NO-SOURCE
file or directory '<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/buildSrc/src/main/java>', not found
Skipping task ':buildSrc:compileJava' as it has no source files and no previous output files.
:buildSrc:compileJava (Thread[Execution worker for ':buildSrc',5,main]) completed. Took 0.207 secs.
:buildSrc:compileGroovy (Thread[Execution worker for ':buildSrc',5,main]) started.

> Task :buildSrc:compileGroovy
Build cache key for task ':buildSrc:compileGroovy' is d2673c35376e7b534fbb9ad0c6042586
Task ':buildSrc:compileGroovy' is not up-to-date because:
  No history is available.

> Task :buildSrc:compileGroovy FAILED
:buildSrc:compileGroovy (Thread[Execution worker for ':buildSrc',5,main]) completed. Took 1 mins 0.577 secs.
:buildSrc:pluginDescriptors (Thread[Execution worker for ':buildSrc',5,main]) started.

> Task :buildSrc:pluginDescriptors
Caching disabled for task ':buildSrc:pluginDescriptors' because:
  Not worth caching
Task ':buildSrc:pluginDescriptors' is not up-to-date because:
  No history is available.
:buildSrc:pluginDescriptors (Thread[Execution worker for ':buildSrc',5,main]) completed. Took 0.014 secs.
:buildSrc:processResources (Thread[Execution worker for ':buildSrc' Thread 11,5,main]) started.

> Task :buildSrc:processResources
file or directory '<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/buildSrc/src/main/resources>', not found
Caching disabled for task ':buildSrc:processResources' because:
  Not worth caching
Task ':buildSrc:processResources' is not up-to-date because:
  No history is available.
file or directory '<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/buildSrc/src/main/resources>', not found
:buildSrc:processResources (Thread[Execution worker for ':buildSrc' Thread 11,5,main]) completed. Took 0.056 secs.
:buildSrc:spotlessGroovy (Thread[Execution worker for ':buildSrc' Thread 11,5,main]) started.

> Task :buildSrc:spotlessGroovy
file or directory '<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/buildSrc/src/test/groovy>', not found
file or directory '<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/buildSrc/src/test/groovy>', not found
Build cache key for task ':buildSrc:spotlessGroovy' is f59ca3c78f61c5b486895d2c59bee9ea
Task ':buildSrc:spotlessGroovy' is not up-to-date because:
  No history is available.
Invalidating in-memory cache of /home/jenkins/.gradle/caches/journal-1/file-access.bin
Loaded cache entry for task ':buildSrc:spotlessGroovy' with cache key f59ca3c78f61c5b486895d2c59bee9ea

> Task :buildSrc:spotlessGroovy FROM-CACHE
:buildSrc:spotlessGroovy (Thread[Execution worker for ':buildSrc' Thread 11,5,main]) completed. Took 25.954 secs.
:buildSrc:spotlessGroovyCheck (Thread[Execution worker for ':buildSrc' Thread 11,5,main]) started.

> Task :buildSrc:spotlessGroovyCheck UP-TO-DATE
Caching disabled for task ':buildSrc:spotlessGroovyCheck' because:
  Caching has not been enabled for the task
Task ':buildSrc:spotlessGroovyCheck' is not up-to-date because:
  Task has not declared any outputs despite executing actions.
:buildSrc:spotlessGroovyCheck (Thread[Execution worker for ':buildSrc' Thread 11,5,main]) completed. Took 0.03 secs.
:buildSrc:spotlessGroovyGradle (Thread[Execution worker for ':buildSrc' Thread 7,5,main]) started.

> Task :buildSrc:spotlessGroovyGradle FROM-CACHE
Build cache key for task ':buildSrc:spotlessGroovyGradle' is 4492d2e9ab6abfbdba67c84cbe6c0d91
Task ':buildSrc:spotlessGroovyGradle' is not up-to-date because:
  No history is available.
Loaded cache entry for task ':buildSrc:spotlessGroovyGradle' with cache key 4492d2e9ab6abfbdba67c84cbe6c0d91
:buildSrc:spotlessGroovyGradle (Thread[Execution worker for ':buildSrc' Thread 7,5,main]) completed. Took 0.082 secs.
:buildSrc:spotlessGroovyGradleCheck (Thread[Execution worker for ':buildSrc' Thread 11,5,main]) started.

> Task :buildSrc:spotlessGroovyGradleCheck UP-TO-DATE
Caching disabled for task ':buildSrc:spotlessGroovyGradleCheck' because:
  Caching has not been enabled for the task
Task ':buildSrc:spotlessGroovyGradleCheck' is not up-to-date because:
  Task has not declared any outputs despite executing actions.
:buildSrc:spotlessGroovyGradleCheck (Thread[Execution worker for ':buildSrc' Thread 11,5,main]) completed. Took 0.002 secs.
:buildSrc:spotlessCheck (Thread[Execution worker for ':buildSrc' Thread 11,5,main]) started.

> Task :buildSrc:spotlessCheck UP-TO-DATE
Skipping task ':buildSrc:spotlessCheck' as it has no actions.
:buildSrc:spotlessCheck (Thread[Execution worker for ':buildSrc' Thread 11,5,main]) completed. Took 0.0 secs.
:buildSrc:processTestResources (Thread[Execution worker for ':buildSrc' Thread 11,5,main]) started.

> Task :buildSrc:processTestResources NO-SOURCE
file or directory '<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/buildSrc/src/test/resources>', not found
Skipping task ':buildSrc:processTestResources' as it has no source files and no previous output files.
:buildSrc:processTestResources (Thread[Execution worker for ':buildSrc' Thread 11,5,main]) completed. Took 0.001 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':buildSrc:compileGroovy'.
> Failed to load cache entry for task ':buildSrc:compileGroovy'

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

BUILD FAILED in 1m 39s
7 actionable tasks: 3 executed, 2 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/jxuzznvwlkg2c

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3016

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3016/display/redirect>

Changes:


------------------------------------------
[...truncated 347.32 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 07b1350188b7fd304211b17a2ad78ad6
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar>","--region=us-central1"] -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 10, 2022 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 10, 2022 12:45:12 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 10, 2022 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 10, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 10, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 10, 2022 12:45:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 10, 2022 12:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 10, 2022 12:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 10, 2022 12:45:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 10, 2022 12:45:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
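
The SQL:/SQLPlan>/BEAMPlan> triple above shows Calcite parsing the query, the logical plan, and the Beam physical plan (a BeamCalcRel filter/projection over the BigQuery source). In application code the same query is typically submitted through Beam SQL's SqlTransform; a minimal sketch, where hackerNewsRows stands in for a schema-aware PCollection<Row> (the real test resolves beam.HACKER_NEWS through a BigQuery table provider instead):

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TupleTag;

    class HackerNewsQuery {
      // Registers the input under the name HACKER_NEWS and runs the logged query.
      static PCollection<Row> storiesAndJobs(PCollection<Row> hackerNewsRows) {
        return PCollectionTuple.of(new TupleTag<Row>("HACKER_NEWS"), hackerNewsRows)
            .apply(
                SqlTransform.query(
                    "SELECT `by` AS author, `type`, title, score "
                        + "FROM HACKER_NEWS "
                        + "WHERE (`type` = 'story' OR `type` = 'job') AND score > 2"));
      }
    }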


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 10, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 10, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 10, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 10, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 10, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 10, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 10, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@848490487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 10, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 10, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 10, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 10, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 10, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 10, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 10, 2022 12:45:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 10, 2022 12:45:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
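
For reference, the pushed-down plan above (usedFields plus a supported BigQueryFilter) is roughly
equivalent to the following direct BigQueryIO read, with the projection and filter applied server-side
over the Storage API. The table name is a placeholder, since the copy of the Hacker News table the
test reads is not named in this log, and "pipeline" is assumed to be an existing Pipeline:

    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;

    pipeline.apply("PushedDownRead",
        BigQueryIO.readTableRows()
            .from("bigquery-public-data:hacker_news.full") // placeholder table
            .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
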
    Feb 10, 2022 12:45:19 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 10, 2022 12:45:23 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 10, 2022 12:45:24 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT--9wPQyO2orjaFsHLajvxft2loD9qCcqX0M-lctjvE8A.jar
    Feb 10, 2022 12:45:24 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test494975704333446634.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-GCY89iR8V7henHdniCR4Hwk2Lfb73D07kk6g2sHa3GE.jar
    Feb 10, 2022 12:45:24 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 0 seconds
    Feb 10, 2022 12:45:24 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 10, 2022 12:45:24 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143949 bytes, hash 62af15441379a1a84c824c3693aa9bf2af0854062a677d1df59c5f93722da66e> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Yq8VRBN5oahMgkw2k6qb8q8IVAYqZ30d9Zxfk3Itpm4.pb
    Feb 10, 2022 12:45:27 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 10, 2022 12:45:28 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 10, 2022 12:45:28 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 10, 2022 12:45:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 10, 2022 12:45:31 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-10_04_45_28-11942263831246316054?project=apache-beam-testing
    Feb 10, 2022 12:45:31 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-10_04_45_28-11942263831246316054
    Feb 10, 2022 12:45:31 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-10_04_45_28-11942263831246316054
    Feb 10, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-10T12:45:34.509Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
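
The warning reflects how the Dataflow worker-pool options interact rather than a failure: with
autoscalingAlgorithm=NONE, Dataflow provisions exactly --numWorkers workers and --maxNumWorkers is
ignored. Roughly, the two internally consistent configurations are:

    --numWorkers=5 --autoscalingAlgorithm=NONE                 # fixed pool of 5; maxNumWorkers has no effect
    --maxNumWorkers=5 --autoscalingAlgorithm=THROUGHPUT_BASED  # let Dataflow scale up to 5
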
    Feb 10, 2022 12:45:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T12:45:42.432Z: Worker configuration: e2-standard-2 in us-central1-c.
    Feb 10, 2022 12:45:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T12:45:43.240Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 10, 2022 12:45:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T12:45:43.282Z: Expanding GroupByKey operations into optimizable parts.
    Feb 10, 2022 12:45:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T12:45:43.311Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 10, 2022 12:45:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T12:45:43.385Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 10, 2022 12:45:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T12:45:43.412Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 10, 2022 12:45:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T12:45:43.447Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 10, 2022 12:45:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T12:45:43.783Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 10, 2022 12:45:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T12:45:43.862Z: Starting 5 workers in us-central1-c...
    Feb 10, 2022 12:45:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T12:45:47.125Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 10, 2022 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T12:46:31.338Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 10, 2022 12:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T12:46:56.255Z: Workers have started successfully.
    Feb 10, 2022 12:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T12:46:56.281Z: Workers have started successfully.
    Feb 10, 2022 12:47:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T12:47:29.758Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 10, 2022 12:47:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T12:47:29.914Z: Cleaning up.
    Feb 10, 2022 12:47:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T12:47:29.992Z: Stopping worker pool...
    Feb 10, 2022 12:50:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T12:50:20.995Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 10, 2022 12:50:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T12:50:21.029Z: Worker pool stopped.
    Feb 10, 2022 12:50:28 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-10_04_45_28-11942263831246316054 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 2ac0eab2-570c-45b7-a77c-2f2177a0912d and timestamp: 2022-02-10T12:50:29.039000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.886

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 10, 2022 12:50:29 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
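
The metrics were computed but dropped because no InfluxDB measurement/database settings appear in
-DbeamTestPipelineOptions. Assuming the InfluxDB options used by other Beam performance jobs (the
option names and values below are assumptions annotated as such, not taken from this build),
publishing would need additions along these lines:

    "--influxMeasurement=sql_bqio_read_java_batch",  # assumed option name
    "--influxDatabase=beam_test_metrics",            # assumed option name and database
    "--influxHost=http://localhost:8086"             # assumed option name and endpoint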

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 5 mins 20.902 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 7s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/m4zhb5vnqk6km

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3015

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3015/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13867] Drop NaNs returned by nlargest in flight_delays example

[noreply] Announce Python 3.9 in CHANGES.md (#16802)

[Brian Hulette] Moving to 2.38.0-SNAPSHOT on master branch.

[noreply] [BEAM-11095] Better error handling for iter/reiter/multimap (#16794)


------------------------------------------
[...truncated 433.27 KB...]
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 412 Precondition Failed
    PUT https://storage.googleapis.com/upload/storage/v1/b/temp-storage-for-perf-tests/o?ifGenerationMatch=0&name=loadtests/staging/beam-runners-java-fn-execution-2.38.0-SNAPSHOT-y5AmPZIEquBHSVWmJOM2B6uYCETBjLItz--NU_zQ0mQ.jar&uploadType=resumable&upload_id=ADPycduEbPfxqjk2LAzPTUKuhVI7AQtuopYfWNdOUf-GZXk6HBDDxT1x-smegw9otGJhzJD_-DEaiWv02OipgBBKmJA
    {
      "code" : 412,
      "errors" : [ {
        "domain" : "global",
        "location" : "If-Match",
        "locationType" : "header",
        "message" : "At least one of the pre-conditions you specified did not hold.",
        "reason" : "conditionNotMet"
      } ],
      "message" : "At least one of the pre-conditions you specified did not hold."
    }
    	at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:146)
    	at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:118)
    	at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:37)
    	at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:532)
    	at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:455)
    	at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:565)
    	at com.google.cloud.hadoop.util.AbstractGoogleAsyncWriteChannel$UploadOperation.call(AbstractGoogleAsyncWriteChannel.java:85)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	... 3 more
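
The 412 here is the expected losing side of a staging race: ifGenerationMatch=0 asks GCS to create
the object only if it does not already exist, and the staged jar names are content-hashed, so a
concurrent build uploading the identical jar makes the precondition fail harmlessly and the retry
finds the object in place. A sketch of the same conditional create, using the standalone
google-cloud-storage client for illustration rather than the Hadoop connector this build goes
through (the object name and jarBytes are illustrative):

    import com.google.cloud.storage.BlobInfo;
    import com.google.cloud.storage.Storage;
    import com.google.cloud.storage.StorageException;
    import com.google.cloud.storage.StorageOptions;

    Storage storage = StorageOptions.getDefaultInstance().getService();
    BlobInfo blob = BlobInfo.newBuilder(
        "temp-storage-for-perf-tests",
        "loadtests/staging/example-staged.jar").build(); // illustrative name
    try {
      // Equivalent of ifGenerationMatch=0: create only if the object is absent.
      storage.create(blob, jarBytes, Storage.BlobTargetOption.doesNotExist());
    } catch (StorageException e) {
      if (e.getCode() == 412) {
        // Another worker staged the identical, content-addressed jar first;
        // the upload can be treated as already done.
      } else {
        throw e;
      }
    }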

    Feb 10, 2022 6:47:04 AM org.apache.beam.sdk.extensions.gcp.util.RetryHttpRequestInitializer$LoggingHttpBackOffHandler handleResponse
    WARNING: Request failed with code 412, performed 0 retries due to IOExceptions, performed 0 retries due to unsuccessful status codes, HTTP framework says request can be retried, (caller responsible for retrying): https://storage.googleapis.com/upload/storage/v1/b/temp-storage-for-perf-tests/o?ifGenerationMatch=0&name=loadtests/staging/beam-runners-direct-java-2.38.0-SNAPSHOT-2OIZDzzW7wSvzHYQUg_mNn-rfhi4ueAqR8NbPDgMmZo.jar&uploadType=resumable&upload_id=ADPycds_Nzp1pm4Vuli99UqTDlsSk5mH46BTYa190K47yFIr8W6gYvp3nJ9W_CGVNRcMUaaYxOIhxTN_d5YaMlIBZHA. 
    Feb 10, 2022 6:47:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackageWithRetry
    WARNING: Upload attempt failed, sleeping before retrying staging of package: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/direct-java/build/libs/beam-runners-direct-java-2.38.0-SNAPSHOT.jar>
    java.io.IOException: Upload failed for 'gs://temp-storage-for-perf-tests/loadtests/staging/beam-runners-direct-java-2.38.0-SNAPSHOT-2OIZDzzW7wSvzHYQUg_mNn-rfhi4ueAqR8NbPDgMmZo.jar'
    	at com.google.cloud.hadoop.util.BaseAbstractGoogleAsyncWriteChannel.waitForCompletionAndThrowIfUploadFailed(BaseAbstractGoogleAsyncWriteChannel.java:260)
    	at com.google.cloud.hadoop.util.BaseAbstractGoogleAsyncWriteChannel.close(BaseAbstractGoogleAsyncWriteChannel.java:168)
    	at org.apache.beam.runners.dataflow.util.PackageUtil.tryStagePackage(PackageUtil.java:227)
    	at org.apache.beam.runners.dataflow.util.PackageUtil.tryStagePackageWithRetry(PackageUtil.java:174)
    	at org.apache.beam.runners.dataflow.util.PackageUtil.stagePackageSynchronously(PackageUtil.java:158)
    	at org.apache.beam.runners.dataflow.util.PackageUtil.lambda$stagePackage$1(PackageUtil.java:142)
    	at org.apache.beam.sdk.util.MoreFutures.lambda$supplyAsync$0(MoreFutures.java:107)
    	at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1640)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 412 Precondition Failed
    PUT https://storage.googleapis.com/upload/storage/v1/b/temp-storage-for-perf-tests/o?ifGenerationMatch=0&name=loadtests/staging/beam-runners-direct-java-2.38.0-SNAPSHOT-2OIZDzzW7wSvzHYQUg_mNn-rfhi4ueAqR8NbPDgMmZo.jar&uploadType=resumable&upload_id=ADPycds_Nzp1pm4Vuli99UqTDlsSk5mH46BTYa190K47yFIr8W6gYvp3nJ9W_CGVNRcMUaaYxOIhxTN_d5YaMlIBZHA
    {
      "code" : 412,
      "errors" : [ {
        "domain" : "global",
        "location" : "If-Match",
        "locationType" : "header",
        "message" : "At least one of the pre-conditions you specified did not hold.",
        "reason" : "conditionNotMet"
      } ],
      "message" : "At least one of the pre-conditions you specified did not hold."
    }
    	at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:146)
    	at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:118)
    	at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:37)
    	at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:532)
    	at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:455)
    	at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:565)
    	at com.google.cloud.hadoop.util.AbstractGoogleAsyncWriteChannel$UploadOperation.call(AbstractGoogleAsyncWriteChannel.java:85)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	... 3 more

    Feb 10, 2022 6:47:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-runners-google-cloud-dataflow-java-legacy-worker-2.38.0-SNAPSHOT--9wPQyO2orjaFsHLajvxft2loD9qCcqX0M-lctjvE8A.jar
    Feb 10, 2022 6:47:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/udf/build/libs/beam-sdks-java-extensions-sql-udf-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-udf-2.38.0-SNAPSHOT-MlQQAS1Je1cH4lXtIOKM3zL6o97S5F7sJQ3zUailw08.jar
    Feb 10, 2022 6:47:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/model/pipeline/build/libs/beam-model-pipeline-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-model-pipeline-2.38.0-SNAPSHOT-4E8VZcemUcIhrxZinVT18wDTQVFrxWZ9AeIUzb19XQM.jar
    Feb 10, 2022 6:47:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/model/fn-execution/build/libs/beam-model-fn-execution-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-model-fn-execution-2.38.0-SNAPSHOT-xmXkn2j-OD2Lbj5F_veL6dKt7KLxzBy3yuTY-PDi3cE.jar
    Feb 10, 2022 6:47:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/model/job-management/build/libs/beam-model-job-management-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-model-job-management-2.38.0-SNAPSHOT-lbkQM4Z6zo7XchsGQUWRc5IXSzh0igMpxX5JrUIaNP0.jar
    Feb 10, 2022 6:47:04 AM org.apache.beam.sdk.extensions.gcp.util.RetryHttpRequestInitializer$LoggingHttpBackOffHandler handleResponse
    WARNING: Request failed with code 412, performed 0 retries due to IOExceptions, performed 0 retries due to unsuccessful status codes, HTTP framework says request can be retried, (caller responsible for retrying): https://storage.googleapis.com/upload/storage/v1/b/temp-storage-for-perf-tests/o?ifGenerationMatch=0&name=loadtests/staging/beam-runners-direct-java-2.38.0-SNAPSHOT-tests-k1y9jl4brNGRKf3NNYqvzwx-J_Wl4f_m72GF_b41_KE.jar&uploadType=resumable&upload_id=ADPycds9l1VOLr5y8zKGmWqm0EQUfwW-GbVB2qTeUAWVZOADV8FUlaZdFjQc9PPnzRX7Ii1ITcCwH1awohTYrT3BO4s. 
    Feb 10, 2022 6:47:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackageWithRetry
    WARNING: Upload attempt failed, sleeping before retrying staging of package: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/direct-java/build/libs/beam-runners-direct-java-2.38.0-SNAPSHOT-tests.jar>
    java.io.IOException: Upload failed for 'gs://temp-storage-for-perf-tests/loadtests/staging/beam-runners-direct-java-2.38.0-SNAPSHOT-tests-k1y9jl4brNGRKf3NNYqvzwx-J_Wl4f_m72GF_b41_KE.jar'
    	at com.google.cloud.hadoop.util.BaseAbstractGoogleAsyncWriteChannel.waitForCompletionAndThrowIfUploadFailed(BaseAbstractGoogleAsyncWriteChannel.java:260)
    	at com.google.cloud.hadoop.util.BaseAbstractGoogleAsyncWriteChannel.close(BaseAbstractGoogleAsyncWriteChannel.java:168)
    	at org.apache.beam.runners.dataflow.util.PackageUtil.tryStagePackage(PackageUtil.java:227)
    	at org.apache.beam.runners.dataflow.util.PackageUtil.tryStagePackageWithRetry(PackageUtil.java:174)
    	at org.apache.beam.runners.dataflow.util.PackageUtil.stagePackageSynchronously(PackageUtil.java:158)
    	at org.apache.beam.runners.dataflow.util.PackageUtil.lambda$stagePackage$1(PackageUtil.java:142)
    	at org.apache.beam.sdk.util.MoreFutures.lambda$supplyAsync$0(MoreFutures.java:107)
    	at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1640)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 412 Precondition Failed
    PUT https://storage.googleapis.com/upload/storage/v1/b/temp-storage-for-perf-tests/o?ifGenerationMatch=0&name=loadtests/staging/beam-runners-direct-java-2.38.0-SNAPSHOT-tests-k1y9jl4brNGRKf3NNYqvzwx-J_Wl4f_m72GF_b41_KE.jar&uploadType=resumable&upload_id=ADPycds9l1VOLr5y8zKGmWqm0EQUfwW-GbVB2qTeUAWVZOADV8FUlaZdFjQc9PPnzRX7Ii1ITcCwH1awohTYrT3BO4s
    {
      "code" : 412,
      "errors" : [ {
        "domain" : "global",
        "location" : "If-Match",
        "locationType" : "header",
        "message" : "At least one of the pre-conditions you specified did not hold.",
        "reason" : "conditionNotMet"
      } ],
      "message" : "At least one of the pre-conditions you specified did not hold."
    }
    	at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:146)
    	at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:118)
    	at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:37)
    	at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:532)
    	at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:455)
    	at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:565)
    	at com.google.cloud.hadoop.util.AbstractGoogleAsyncWriteChannel$UploadOperation.call(AbstractGoogleAsyncWriteChannel.java:85)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	... 3 more

    Feb 10, 2022 6:47:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/expansion-service/build/libs/beam-sdks-java-expansion-service-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-expansion-service-2.38.0-SNAPSHOT-wVOoc1013OlCJXLLwYQqmrkpqjWmWZ-eEb7dyxuxaZ0.jar
    Feb 10, 2022 6:47:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/java-fn-execution/build/libs/beam-runners-java-fn-execution-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-runners-java-fn-execution-2.38.0-SNAPSHOT-y5AmPZIEquBHSVWmJOM2B6uYCETBjLItz--NU_zQ0mQ.jar
    Feb 10, 2022 6:47:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/core-java/build/libs/beam-runners-core-java-2.38.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-runners-core-java-2.38.0-SNAPSHOT-tests-86YFSrUjHBcps6XL0tlAdfcrI24arbkc7pJN2SIMASs.jar
    Feb 10, 2022 6:47:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/testing/test-utils/build/libs/beam-sdks-java-test-utils-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-test-utils-2.38.0-SNAPSHOT-uDXAbi02WU3WDaNGqETzLc5tbQsW_968W1HOImTwz-s.jar
    Feb 10, 2022 6:47:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/google-cloud-platform-core/build/libs/beam-sdks-java-extensions-google-cloud-platform-core-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-google-cloud-platform-core-2.38.0-SNAPSHOT-qU2gEWd9S4Nei1v26nxKTEzAlV70LrhYnzSBMQD870I.jar
    Feb 10, 2022 6:47:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/core-java/build/libs/beam-runners-core-java-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-runners-core-java-2.38.0-SNAPSHOT-KVQ8HomOPvtnI-pI8Oepk7xZwKLFipAAl1KPBLRobQc.jar
    Feb 10, 2022 6:47:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/core-construction-java/build/libs/beam-runners-core-construction-java-2.38.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-runners-core-construction-java-2.38.0-SNAPSHOT-tests-0jRR7JzWHcS4hrBUFC8hu744ndg7hVsCooiAGOovbQc.jar
    Feb 10, 2022 6:47:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/testing/test-utils/build/libs/beam-sdks-java-test-utils-2.38.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-test-utils-2.38.0-SNAPSHOT-tests-wOIjnLwpsA2382xq4SXCz78v3qXhmFKx1ch-swjvEvM.jar
    Feb 10, 2022 6:47:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/arrow/build/libs/beam-sdks-java-extensions-arrow-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-arrow-2.38.0-SNAPSHOT--6o0gUFJinkN4upVAkTzRdayVoYWlnjhjcsg2E3us1A.jar
    Feb 10, 2022 6:47:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/build/libs/beam-runners-google-cloud-dataflow-java-2.38.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-runners-google-cloud-dataflow-java-2.38.0-SNAPSHOT-tests-C9ZVXhSd4qfA8SHZikk_-jnXy4mGsc4IpxdZwPsACZM.jar
    Feb 10, 2022 6:47:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/protobuf/build/libs/beam-sdks-java-extensions-protobuf-2.38.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-protobuf-2.38.0-SNAPSHOT-tests--UFKXFnjsSCAbTRsxR-C2rXoW1l-kikGxro7KSXKpQo.jar
    Feb 10, 2022 6:47:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/direct-java/build/libs/beam-runners-direct-java-2.38.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-runners-direct-java-2.38.0-SNAPSHOT-tests-k1y9jl4brNGRKf3NNYqvzwx-J_Wl4f_m72GF_b41_KE.jar
    Feb 10, 2022 6:47:11 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/direct-java/build/libs/beam-runners-direct-java-2.38.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-runners-direct-java-2.38.0-SNAPSHOT-2OIZDzzW7wSvzHYQUg_mNn-rfhi4ueAqR8NbPDgMmZo.jar
    Feb 10, 2022 6:47:11 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 333 files cached, 34 files newly uploaded in 7 seconds
    Feb 10, 2022 6:47:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 10, 2022 6:47:11 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143949 bytes, hash e161abf6b235cc6a1e889723968724ea4c9e0233f20666d2f4e345e0ef2dc66c> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-4WGr9rI1zGoeiJcjlock6kyeAjPyBmbS9ONF4O8txmw.pb
    Feb 10, 2022 6:47:14 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 10, 2022 6:47:14 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 10, 2022 6:47:14 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 10, 2022 6:47:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.38.0-SNAPSHOT
    Feb 10, 2022 6:47:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-09_22_47_14-6158614009356937155?project=apache-beam-testing
    Feb 10, 2022 6:47:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-09_22_47_14-6158614009356937155
    Feb 10, 2022 6:47:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-09_22_47_14-6158614009356937155
    Feb 10, 2022 6:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-10T06:47:18.680Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 10, 2022 6:47:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T06:47:27.154Z: Worker configuration: e2-standard-2 in us-central1-c.
    Feb 10, 2022 6:47:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T06:47:27.959Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 10, 2022 6:47:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T06:47:27.991Z: Expanding GroupByKey operations into optimizable parts.
    Feb 10, 2022 6:47:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T06:47:28.009Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 10, 2022 6:47:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T06:47:28.071Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 10, 2022 6:47:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T06:47:28.100Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 10, 2022 6:47:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T06:47:28.135Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 10, 2022 6:47:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T06:47:28.447Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 10, 2022 6:47:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T06:47:28.523Z: Starting 5 workers in us-central1-c...
    Feb 10, 2022 6:47:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T06:47:43.663Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 10, 2022 6:48:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T06:48:09.860Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 10, 2022 6:48:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T06:48:35.377Z: Workers have started successfully.
    Feb 10, 2022 6:48:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T06:48:35.402Z: Workers have started successfully.
    Feb 10, 2022 6:49:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T06:49:04.191Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 10, 2022 6:49:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T06:49:04.342Z: Cleaning up.
    Feb 10, 2022 6:49:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T06:49:04.417Z: Stopping worker pool...
    Feb 10, 2022 6:51:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T06:51:29.230Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 10, 2022 6:51:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T06:51:29.298Z: Worker pool stopped.
    Feb 10, 2022 6:51:35 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-09_22_47_14-6158614009356937155 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 42df6b98-8bf3-4e4a-936b-c503598ce1cb and timestamp: 2022-02-10T06:51:35.076000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.287

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 10, 2022 6:51:35 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 5 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 46.564 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 12s
165 actionable tasks: 111 executed, 52 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/qbbvgosb5seea

Stopped 4 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3014

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3014/display/redirect?page=changes>

Changes:

[david.prieto.rivera] Missing contribution

[noreply] Merge pull request #15848 from [BEAM-13835] An any-type implementation

[Valentyn Tymofieiev] [BEAM-12920] Assume that bare generators types define simple generators.

[Valentyn Tymofieiev] Add a container for Python 3.9.

[Valentyn Tymofieiev] Allow job submission with Python 3.9 on Dataflow runner

[Valentyn Tymofieiev] Add Python 3.9 test suites. Keep Dataflow V1 suites unchanged for now.

[Valentyn Tymofieiev] Add py3.9 Github actions suites.

[Valentyn Tymofieiev] Py39 Doc updates.

[Valentyn Tymofieiev] [BEAM-9980] Simplify run_validates_container.sh to avoid branching.

[Valentyn Tymofieiev] Update Cython to a new version that has py39 wheels.

[Valentyn Tymofieiev] [BEAM-13845] Fix comparison with potentially incomparable default

[Valentyn Tymofieiev] [BEAM-12920] Assume that bare generators types define simple generators.

[Valentyn Tymofieiev] Mark Python 3.9 as supported version.

[noreply] [release-2.36.0][website] Fix github release notes script, header for

[noreply] Use shell to run python for setupVirtualenv (#16796)

[Daniel Oliveira] [BEAM-13830] Properly shut down Debezium expansion service in IT script.

[noreply] Merge pull request #16659 from [BEAM-13774][Playground] Add user to

[Valentyn Tymofieiev] [BEAM-13868] Remove gsutil dep from hdfs IT test.

[noreply] [BEAM-13776][Playground] (#16731)


------------------------------------------
[...truncated 356.70 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 550a6bcd68f089239a675a9be47e6dc8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 10, 2022 12:45:28 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 10, 2022 12:45:29 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 10, 2022 12:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 366 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 10, 2022 12:45:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 10, 2022 12:45:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 10, 2022 12:45:33 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 10, 2022 12:45:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 10, 2022 12:45:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 10, 2022 12:45:34 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 10, 2022 12:45:34 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1829377218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 10, 2022 12:45:35 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 10, 2022 12:45:35 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 10, 2022 12:45:35 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 10, 2022 12:45:35 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 10, 2022 12:45:35 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 10, 2022 12:45:35 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 10, 2022 12:45:35 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@848490487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 10, 2022 12:45:35 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 10, 2022 12:45:35 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 10, 2022 12:45:35 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 10, 2022 12:45:35 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 10, 2022 12:45:35 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 10, 2022 12:45:35 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 10, 2022 12:45:35 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 10, 2022 12:45:36 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
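
At the BigQueryIO level, the push-down reported above corresponds roughly to a
DIRECT_READ with selected fields plus a row restriction, so both the projection
and the filter are evaluated inside the BigQuery Storage Read API rather than
in the pipeline. A sketch; the table reference is a placeholder:

    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;

    // Hand-written equivalent of the pushed-down read above.
    BigQueryIO.readTableRows()
        .from("<project>:<dataset>.HACKER_NEWS")  // placeholder table reference
        .withMethod(Method.DIRECT_READ)
        .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
        .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");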
    Feb 10, 2022 12:45:36 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 10, 2022 12:45:41 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 367 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 10, 2022 12:45:41 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT--9wPQyO2orjaFsHLajvxft2loD9qCcqX0M-lctjvE8A.jar
    Feb 10, 2022 12:45:41 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4851589697672212348.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-XZsQ3RcxQ90cCSN0YWxj96fhjiPR05b0Xr2SltFjKqg.jar
    Feb 10, 2022 12:45:41 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 366 files cached, 1 files newly uploaded in 0 seconds
    Feb 10, 2022 12:45:41 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 10, 2022 12:45:41 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <143949 bytes, hash 109eb1f698e2cb6c675df95960be082f4ad15660e7e2b2e436dcc1093db7a987> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-EJ6x9pjiy2xnXflZYL4IL0rRVmDn4rLkNtzBCT23qYc.pb
    Feb 10, 2022 12:45:45 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 10, 2022 12:45:45 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 10, 2022 12:45:45 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 10, 2022 12:45:45 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Feb 10, 2022 12:45:46 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-09_16_45_45-10278418081982220912?project=apache-beam-testing
    Feb 10, 2022 12:45:46 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-09_16_45_45-10278418081982220912
    Feb 10, 2022 12:45:46 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-09_16_45_45-10278418081982220912
    Feb 10, 2022 12:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-10T00:45:57.156Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 10, 2022 12:46:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T00:46:07.604Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 10, 2022 12:46:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T00:46:08.380Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 10, 2022 12:46:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T00:46:08.440Z: Expanding GroupByKey operations into optimizable parts.
    Feb 10, 2022 12:46:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T00:46:08.468Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 10, 2022 12:46:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T00:46:08.537Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 10, 2022 12:46:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T00:46:08.566Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 10, 2022 12:46:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T00:46:08.595Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 10, 2022 12:46:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T00:46:08.952Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 10, 2022 12:46:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T00:46:09.021Z: Starting 5 workers in us-central1-a...
    Feb 10, 2022 12:46:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T00:46:31.394Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 10, 2022 12:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T00:46:52.130Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 10, 2022 12:47:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T00:47:18.896Z: Workers have started successfully.
    Feb 10, 2022 12:47:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T00:47:18.931Z: Workers have started successfully.
    Feb 10, 2022 12:47:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T00:47:48.591Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 10, 2022 12:47:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T00:47:48.752Z: Cleaning up.
    Feb 10, 2022 12:47:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T00:47:48.824Z: Stopping worker pool...
    Feb 10, 2022 12:50:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T00:50:11.091Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 10, 2022 12:50:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-10T00:50:11.159Z: Worker pool stopped.
    Feb 10, 2022 12:50:17 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-09_16_45_45-10278418081982220912 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): b77c0308-3347-441e-be4a-49470a3d0ec9 and timestamp: 2022-02-10T00:50:17.857000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.053

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 10, 2022 12:50:17 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
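
The warning means the InfluxDB publisher bailed out because its settings were
never provided. In Beam's test utilities these are carried by InfluxDBSettings;
a sketch with placeholder values (the host, database, and measurement below are
assumptions, not values taken from this job's configuration):

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    // The perf-test harness only publishes metrics when both database and
    // measurement are present.
    InfluxDBSettings settings =
        InfluxDBSettings.builder()
            .withHost("http://localhost:8086")            // placeholder
            .withDatabase("beam_test_metrics")            // placeholder
            .withMeasurement("sql_bqio_read_java_batch")  // placeholder
            .get();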

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.037 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 53.083 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 55s
165 actionable tasks: 107 executed, 56 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/u2dnci4hrpx3g

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3013

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3013/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13803] Add support for native iterable side inputs to the Go SDK

[noreply] [BEAM-11095] Better error handling for illegal emit functions (#16776)

[noreply] Merge pull request #16613 from Supporting JdbcIO driver in classpath for


------------------------------------------
[...truncated 364.80 KB...]
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-09_10_48_31-18247325847884750505
    Feb 09, 2022 6:48:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-09T18:48:34.643Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 09, 2022 6:48:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T18:48:48.564Z: Worker configuration: e2-standard-2 in us-central1-c.
    Feb 09, 2022 6:48:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T18:48:49.108Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 09, 2022 6:48:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T18:48:49.146Z: Expanding GroupByKey operations into optimizable parts.
    Feb 09, 2022 6:48:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T18:48:49.175Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 09, 2022 6:48:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T18:48:49.242Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 09, 2022 6:48:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T18:48:49.276Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 09, 2022 6:48:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T18:48:49.302Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 09, 2022 6:48:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T18:48:49.585Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 09, 2022 6:48:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T18:48:49.661Z: Starting 5 workers in us-central1-c...
    Feb 09, 2022 6:49:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T18:49:06.327Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 09, 2022 6:49:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T18:49:34.723Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 09, 2022 6:50:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T18:49:59.793Z: Workers have started successfully.
    Feb 09, 2022 6:50:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T18:49:59.820Z: Workers have started successfully.
    Feb 09, 2022 6:50:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-09T18:50:29.125Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEdnekY1ZHNJbWNxShoCamQaAmly/streams/CAYaAmpkGgJpciCDpeibBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEdnekY1ZHNJbWNxShoCamQaAmly/streams/CAYaAmpkGgJpciCDpeibBygC': offset 70782 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEdnekY1ZHNJbWNxShoCamQaAmly/streams/CAYaAmpkGgJpciCDpeibBygC': offset 70782 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
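
The FAILED_PRECONDITION above comes from the BigQuery Storage Read API: the
reader requested an offset the server had not yet allocated for that stream.
Stripped of the Beam wrapping, the failing call path looks roughly like the
sketch below; the client and request types are from the
google-cloud-bigquerystorage v1 API, and the stream name and offset are copied
from the log entry above:

    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    try (BigQueryReadClient client = BigQueryReadClient.create()) {
      ReadRowsRequest request =
          ReadRowsRequest.newBuilder()
              .setReadStream(
                  "projects/apache-beam-testing/locations/us/sessions/"
                      + "CAISDEdnekY1ZHNJbWNxShoCamQaAmly/streams/CAYaAmpkGgJpciCDpeibBygC")
              .setOffset(70782)  // asking past the committed end raises FAILED_PRECONDITION
              .build();
      for (ReadRowsResponse response : client.readRowsCallable().call(request)) {
        long rowCount = response.getRowCount();  // consume the server stream
      }
    }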

    Feb 09, 2022 6:50:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-09T18:50:30.238Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEdnekY1ZHNJbWNxShoCamQaAmly/streams/GgJqZBoCaXIgheiqNSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEdnekY1ZHNJbWNxShoCamQaAmly/streams/GgJqZBoCaXIgheiqNSgC': offset 92452 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEdnekY1ZHNJbWNxShoCamQaAmly/streams/GgJqZBoCaXIgheiqNSgC': offset 92452 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 09, 2022 6:50:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-09T18:50:30.257Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEdnekY1ZHNJbWNxShoCamQaAmly/streams/CAMaAmpkGgJpciDAtsPJBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEdnekY1ZHNJbWNxShoCamQaAmly/streams/CAMaAmpkGgJpciDAtsPJBygC': offset 97973 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEdnekY1ZHNJbWNxShoCamQaAmly/streams/CAMaAmpkGgJpciDAtsPJBygC': offset 97973 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 09, 2022 6:50:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T18:50:32.448Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 09, 2022 6:50:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T18:50:32.604Z: Cleaning up.
    Feb 09, 2022 6:50:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T18:50:32.684Z: Stopping worker pool...
    Feb 09, 2022 6:53:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T18:53:03.995Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 09, 2022 6:53:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T18:53:04.042Z: Worker pool stopped.
    Feb 09, 2022 6:53:09 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-09_10_48_31-18247325847884750505 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): fb639dae-fcb3-4c2c-ba84-b9a55e7e30b4 and timestamp: 2022-02-09T18:53:09.503000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    11.121

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 09, 2022 6:53:09 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 5 mins 5.225 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 29s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/4xiyqvqcq4w2o

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3012

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3012/display/redirect?page=changes>

Changes:

[mmack] [adhoc] Remove remaining usage of Powermock from aws2.


------------------------------------------
[...truncated 346.95 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 4202cafad5fffd872da8c41d4e929cb7
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 09, 2022 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
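
The deprecation warning maps one pipeline option onto another; in the test
invocation above that would mean swapping the empty legacy flag for its
replacement, e.g. (the image tag below is illustrative, not what this job
used):

    --workerHarnessContainerImage=                      # legacy, deprecated
    --sdkContainerImage=apache/beam_java8_sdk:2.37.0    # current equivalent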
    Feb 09, 2022 12:45:14 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 09, 2022 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 09, 2022 12:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 09, 2022 12:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 09, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 09, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 09, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 09, 2022 12:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 09, 2022 12:45:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@625540097]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 09, 2022 12:45:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 09, 2022 12:45:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 09, 2022 12:45:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 09, 2022 12:45:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 09, 2022 12:45:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 09, 2022 12:45:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 09, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@434699861]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 09, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 09, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 09, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 09, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 09, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 09, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 09, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 09, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 09, 2022 12:45:20 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 09, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 09, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT--9wPQyO2orjaFsHLajvxft2loD9qCcqX0M-lctjvE8A.jar
    Feb 09, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3950104860529980104.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Ug6PpdeVhJ_9NQsM69vjLYGZi2Fx_SpSz3uBzo7-Hfs.jar
    Feb 09, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Feb 09, 2022 12:45:25 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 09, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142351 bytes, hash 7087ee961bd5869de8cea1909146c841a3b35f85d64324e722b8f3451f08fda8> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-cIfulhvVhp3ozqGQkUbIQaOzX4XWQyTnIrjzRR8I_ag.pb
    Feb 09, 2022 12:45:29 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 09, 2022 12:45:29 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 09, 2022 12:45:29 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 09, 2022 12:45:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Feb 09, 2022 12:45:32 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-09_04_45_29-8500126716760453771?project=apache-beam-testing
    Feb 09, 2022 12:45:32 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-09_04_45_29-8500126716760453771
    Feb 09, 2022 12:45:32 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-09_04_45_29-8500126716760453771
    Feb 09, 2022 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-09T12:45:34.709Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 09, 2022 12:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T12:45:49.668Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 09, 2022 12:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T12:45:52.506Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 09, 2022 12:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T12:45:53.731Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 09, 2022 12:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T12:45:53.813Z: Expanding GroupByKey operations into optimizable parts.
    Feb 09, 2022 12:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T12:45:53.852Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 09, 2022 12:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T12:45:53.924Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 09, 2022 12:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T12:45:53.953Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 09, 2022 12:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T12:45:53.995Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 09, 2022 12:45:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T12:45:54.401Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 09, 2022 12:45:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T12:45:54.513Z: Starting 5 workers in us-central1-a...
    Feb 09, 2022 12:46:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T12:46:40.526Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 09, 2022 12:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T12:47:04.415Z: Workers have started successfully.
    Feb 09, 2022 12:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T12:47:04.444Z: Workers have started successfully.
    Feb 09, 2022 12:47:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T12:47:32.788Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 09, 2022 12:47:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T12:47:32.928Z: Cleaning up.
    Feb 09, 2022 12:47:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T12:47:33.009Z: Stopping worker pool...
    Feb 09, 2022 12:50:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T12:50:14.137Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 09, 2022 12:50:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T12:50:14.199Z: Worker pool stopped.
    Feb 09, 2022 12:50:21 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-09_04_45_29-8500126716760453771 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 015b264e-4d49-4d04-bc77-3e3bcd377118 and timestamp: 2022-02-09T12:50:21.416000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.065

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 09, 2022 12:50:21 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
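
The InfluxDB publisher skips the write whenever the measurement/database settings are absent, which is why these runs only print metrics to stdout. A minimal sketch of supplying them, assuming Beam's testutils InfluxDBSettings builder; the host and database values are placeholders, the measurement name is borrowed from this job's BigQuery table option, and the exact builder method names should be treated as assumptions:

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSettingsSketch {
      public static void main(String[] args) {
        InfluxDBSettings settings =
            InfluxDBSettings.builder()
                .withHost("http://localhost:8086")           // placeholder host
                .withDatabase("beam_test_metrics")           // placeholder database
                .withMeasurement("sql_bqio_read_java_batch") // name reused from this job
                .get();
        // With both values present, publishWithCheck no longer drops the metrics.
        System.out.println(settings.measurement + " @ " + settings.database);
      }
    }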

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 5 mins 12.178 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 57s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/jizwa4acgbu34

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3011

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3011/display/redirect?page=changes>

Changes:

[Kiley Sok] Add java 17 to changes

[Daniel Oliveira] [BEAM-13732] Switch x-lang BigQueryIO expansion service to GCP one.

[noreply] [BEAM-13858] Fix broken github action on :sdks:go:examples:wordCount

[Kiley Sok] add jira for runner v2

[noreply] [BEAM-13732] Go SDK BigQuery IO wrapper. Initial implementation.

[noreply] [BEAM-13732] Add example for Go BigQuery IO wrapper. (#16786)

[noreply] Update CHANGES.md with Go SDK milestones. (#16787)

[noreply] [BEAM-13193] Allow BeamFnDataOutboundObserver to flush elements.


------------------------------------------
[...truncated 362.87 KB...]
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 09, 2022 6:47:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 09, 2022 6:47:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 09, 2022 6:47:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
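
For reference, the query logged above can be issued programmatically through Beam SQL. A minimal sketch, assuming a schema-aware PCollection<Row> registered under the HACKER_NEWS tag; the IT resolves the table through its BigQuery table provider instead, so this wiring is illustrative:

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TupleTag;

    public class SqlQuerySketch {
      // Runs the same projection + filter over an input registered as HACKER_NEWS.
      static PCollection<Row> query(PCollection<Row> hackerNews) {
        return PCollectionTuple.of(new TupleTag<Row>("HACKER_NEWS"), hackerNews)
            .apply(
                SqlTransform.query(
                    "SELECT `by` AS author, type, title, score "
                        + "FROM HACKER_NEWS "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
      }
    }
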
    Feb 09, 2022 6:47:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 09, 2022 6:47:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 09, 2022 6:47:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 09, 2022 6:47:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@290172400]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)
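
This failure is the first root cause listed in the message: the Row output never got a schema, so no RowCoder could be inferred. A minimal sketch of the fix the error itself names, PCollection.setRowSchema, using an illustrative schema rather than the IT's actual one:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.MapElements;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TypeDescriptor;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        Schema schema =
            Schema.builder().addStringField("author").addInt64Field("score").build();
        PCollection<Row> rows =
            p.apply(Create.of("alice"))
                .apply(
                    MapElements.into(TypeDescriptor.of(Row.class))
                        .via((String name) ->
                            Row.withSchema(schema).addValues(name, 3L).build()))
                // Without this, coder inference fails exactly as in the trace above.
                .setRowSchema(schema);
        p.run().waitUntilFinish();
      }
    }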

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 09, 2022 6:47:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 09, 2022 6:47:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 09, 2022 6:47:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 09, 2022 6:47:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 09, 2022 6:47:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 09, 2022 6:47:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 09, 2022 6:47:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 09, 2022 6:47:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
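
The DIRECT_READ path maps the pushed-down projection and filter onto the BigQuery Storage Read API. A minimal sketch of the equivalent hand-written read; the table reference is a placeholder, while the selected fields and row restriction mirror the plan and filter logged above:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        p.apply(
            "Read BQ Rows with push-down",
            BigQueryIO.readTableRows()
                .from("apache-beam-testing:beam.HACKER_NEWS") // placeholder table reference
                .withMethod(TypedRead.Method.DIRECT_READ)
                // Projection push-down: only the used fields leave BigQuery.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Predicate push-down: the filter is evaluated by the storage service.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }
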
    Feb 09, 2022 6:47:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 09, 2022 6:47:23 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 09, 2022 6:47:23 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT--9wPQyO2orjaFsHLajvxft2loD9qCcqX0M-lctjvE8A.jar
    Feb 09, 2022 6:47:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1460779269768793035.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-OpkDkHj1Ke6ge2aPmv1B6zaBD7YW18_ebEz8gG6cp_M.jar
    Feb 09, 2022 6:47:24 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Feb 09, 2022 6:47:24 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 09, 2022 6:47:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142351 bytes, hash b6601b8fd8e1085d37e23f73cdf9edbcf206ed1d5f69268fa874963d9066cc4b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-tmAbj9jhCF034j9zzfntvPIG7R1faSaPqHSWPZBmzEs.pb
    Feb 09, 2022 6:47:27 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 09, 2022 6:47:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 09, 2022 6:47:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 09, 2022 6:47:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Feb 09, 2022 6:47:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-08_22_47_28-12724611233422720798?project=apache-beam-testing
    Feb 09, 2022 6:47:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-08_22_47_28-12724611233422720798
    Feb 09, 2022 6:47:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-08_22_47_28-12724611233422720798
    Feb 09, 2022 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-09T06:47:31.574Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 09, 2022 6:47:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T06:47:47.405Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 09, 2022 6:47:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T06:47:48.321Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 09, 2022 6:47:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T06:47:48.360Z: Expanding GroupByKey operations into optimizable parts.
    Feb 09, 2022 6:47:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T06:47:48.387Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 09, 2022 6:47:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T06:47:48.460Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 09, 2022 6:47:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T06:47:48.490Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 09, 2022 6:47:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T06:47:48.520Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 09, 2022 6:47:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T06:47:48.883Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 09, 2022 6:47:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T06:47:48.965Z: Starting 5 workers in us-central1-a...
    Feb 09, 2022 6:47:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T06:47:52.191Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 09, 2022 6:48:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T06:48:31.401Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 09, 2022 6:48:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T06:48:54.913Z: Workers have started successfully.
    Feb 09, 2022 6:48:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T06:48:54.940Z: Workers have started successfully.
    Feb 09, 2022 6:49:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-09T06:49:23.810Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGJXZjNwZUg3SGhBdhoCamQaAmly/streams/CAUaAmpkGgJpciDqksqdBigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGJXZjNwZUg3SGhBdhoCamQaAmly/streams/CAUaAmpkGgJpciDqksqdBigC': offset 89381 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGJXZjNwZUg3SGhBdhoCamQaAmly/streams/CAUaAmpkGgJpciDqksqdBigC': offset 89381 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
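
For context on the FAILED_PRECONDITION above: the Dataflow worker resumes each Storage Read API stream at a recorded offset, and the service rejects any offset it has not produced yet. A minimal sketch of that resume call, assuming the google-cloud-bigquerystorage v1 client; the stream name and offset are placeholders:

    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    public class ReadRowsResumeSketch {
      public static void main(String[] args) throws Exception {
        try (BigQueryReadClient client = BigQueryReadClient.create()) {
          ReadRowsRequest request =
              ReadRowsRequest.newBuilder()
                  .setReadStream("projects/p/locations/us/sessions/s/streams/st") // placeholder
                  // Requesting an offset beyond what the stream has served so far
                  // yields the "offset ... has not been allocated yet" error above.
                  .setOffset(0)
                  .build();
          for (ReadRowsResponse response : client.readRowsCallable().call(request)) {
            System.out.println("rows in batch: " + response.getRowCount());
          }
        }
      }
    }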

    Feb 09, 2022 6:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T06:49:26.311Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 09, 2022 6:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T06:49:26.471Z: Cleaning up.
    Feb 09, 2022 6:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T06:49:26.551Z: Stopping worker pool...
    Feb 09, 2022 6:51:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T06:51:48.619Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 09, 2022 6:51:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T06:51:48.667Z: Worker pool stopped.
    Feb 09, 2022 6:51:55 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-08_22_47_28-12724611233422720798 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d36da77e-d650-45e5-b4a8-8205736d753a and timestamp: 2022-02-09T06:51:55.387000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.867

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 09, 2022 6:51:55 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 4 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 53.006 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 33s
165 actionable tasks: 110 executed, 53 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/zjcwxqnfapo2w

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3010

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3010/display/redirect?page=changes>

Changes:

[noreply] Update README.md

[marco.robles] Update README with latest PreCommit Jobs

[marco.robles] Update Postcommit jobs with latest jobs

[marco.robles] Update Performance job tests in readme

[marco.robles] update load job tests with latest updates

[marco.robles] update other jobs test with latest updates

[marco.robles] mismatch links fix

[marco.robles] update trigger phrase for some postCommit jobs

[marco.robles] correct trigger phrases in readme

[marco.robles] add pending jobs to readme

[noreply] Update README.md

[marco.robles] fix broken links in jobs & remove the invalid ones

[Kyle Weaver] Update Dataflow Python dev container images.

[noreply] [BEAM-12914] Add missing 3.9 opcodes to type inference. (#16761)

[noreply] [BEAM-13321] Initial BigQueryIO externalization. (#16489)

[noreply] [BEAM-13193] Enable process bundle response elements embedding in Java

[noreply] [BEAM-13830] added a debeziumio_expansion_addr flag to GoSDK (#16780)

[noreply] Apply spotless. (#16783)


------------------------------------------
[...truncated 364.24 KB...]
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-08_16_55_24-8072908292404903778
    Feb 09, 2022 12:55:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-09T00:55:27.333Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 09, 2022 12:55:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T00:55:37.156Z: Worker configuration: e2-standard-2 in us-central1-c.
    Feb 09, 2022 12:55:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T00:55:37.781Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 09, 2022 12:55:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T00:55:37.818Z: Expanding GroupByKey operations into optimizable parts.
    Feb 09, 2022 12:55:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T00:55:37.878Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 09, 2022 12:55:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T00:55:37.960Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 09, 2022 12:55:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T00:55:37.995Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 09, 2022 12:55:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T00:55:38.043Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 09, 2022 12:55:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T00:55:38.645Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 09, 2022 12:55:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T00:55:38.720Z: Starting 5 workers in us-central1-c...
    Feb 09, 2022 12:56:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T00:56:00.823Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 09, 2022 12:56:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T00:56:23.446Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 09, 2022 12:56:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T00:56:48.519Z: Workers have started successfully.
    Feb 09, 2022 12:56:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T00:56:48.554Z: Workers have started successfully.
    Feb 09, 2022 12:57:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-09T00:57:20.014Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGRmZmhRcTJJOHFjOBoCamQaAmly/streams/CAYaAmpkGgJpciChmfrcBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGRmZmhRcTJJOHFjOBoCamQaAmly/streams/CAYaAmpkGgJpciChmfrcBygC': offset 96847 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGRmZmhRcTJJOHFjOBoCamQaAmly/streams/CAYaAmpkGgJpciChmfrcBygC': offset 96847 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 09, 2022 12:57:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-09T00:57:20.821Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGRmZmhRcTJJOHFjOBoCamQaAmly/streams/CAUaAmpkGgJpciCS2-miBSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGRmZmhRcTJJOHFjOBoCamQaAmly/streams/CAUaAmpkGgJpciCS2-miBSgC': offset 122281 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGRmZmhRcTJJOHFjOBoCamQaAmly/streams/CAUaAmpkGgJpciCS2-miBSgC': offset 122281 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 09, 2022 12:57:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-09T00:57:20.824Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGRmZmhRcTJJOHFjOBoCamQaAmly/streams/CAgaAmpkGgJpciDK7t1GKAI"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGRmZmhRcTJJOHFjOBoCamQaAmly/streams/CAgaAmpkGgJpciDK7t1GKAI': offset 115244 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGRmZmhRcTJJOHFjOBoCamQaAmly/streams/CAgaAmpkGgJpciDK7t1GKAI': offset 115244 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 09, 2022 12:57:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T00:57:24.499Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 09, 2022 12:57:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T00:57:24.673Z: Cleaning up.
    Feb 09, 2022 12:57:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T00:57:24.741Z: Stopping worker pool...
    Feb 09, 2022 12:59:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T00:59:53.562Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 09, 2022 12:59:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-09T00:59:53.618Z: Worker pool stopped.
    Feb 09, 2022 12:59:59 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-08_16_55_24-8072908292404903778 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 28b8ae7f-2e90-4cb3-94ee-5d36c533f564 and timestamp: 2022-02-09T00:59:59.516000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.912

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 09, 2022 12:59:59 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 56.755 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 47s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/nljgc3kdmt4yg

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3009

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3009/display/redirect?page=changes>

Changes:

[mmack] [BEAM-13246] Add support for S3 Bucket Key at the object level (AWS Sdk

[Pablo Estrada] Output successful rows from BQ Streaming Inserts

[schapman] BEAM-13439 Type annotation for ptransform_fn

[noreply] [BEAM-13606] Fail bundles with failed BigTable mutations (#16751)


------------------------------------------
[...truncated 365.46 KB...]
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-08_10_45_42-5376111834750232800
    Feb 08, 2022 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-08T18:45:45.993Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 08, 2022 6:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T18:45:52.820Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 08, 2022 6:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T18:45:53.487Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 08, 2022 6:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T18:45:53.551Z: Expanding GroupByKey operations into optimizable parts.
    Feb 08, 2022 6:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T18:45:53.581Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 08, 2022 6:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T18:45:53.653Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 08, 2022 6:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T18:45:53.670Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 08, 2022 6:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T18:45:53.701Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 08, 2022 6:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T18:45:54.035Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 08, 2022 6:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T18:45:54.114Z: Starting 5 workers in us-central1-f...
    Feb 08, 2022 6:46:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T18:46:08.150Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 08, 2022 6:46:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T18:46:40.731Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 08, 2022 6:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T18:47:03.490Z: Workers have started successfully.
    Feb 08, 2022 6:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T18:47:03.552Z: Workers have started successfully.
    Feb 08, 2022 6:47:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-08T18:47:34.092Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEtJQU1XT0Q5Y1ZuZBoCamQaAmly/streams/CAUaAmpkGgJpciC3zcLjAigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEtJQU1XT0Q5Y1ZuZBoCamQaAmly/streams/CAUaAmpkGgJpciC3zcLjAigC': offset 128619 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEtJQU1XT0Q5Y1ZuZBoCamQaAmly/streams/CAUaAmpkGgJpciC3zcLjAigC': offset 128619 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 08, 2022 6:47:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-08T18:47:34.217Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEtJQU1XT0Q5Y1ZuZBoCamQaAmly/streams/CAMaAmpkGgJpciCKocqwBigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEtJQU1XT0Q5Y1ZuZBoCamQaAmly/streams/CAMaAmpkGgJpciCKocqwBigC': offset 80422 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEtJQU1XT0Q5Y1ZuZBoCamQaAmly/streams/CAMaAmpkGgJpciCKocqwBigC': offset 80422 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 08, 2022 6:47:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-08T18:47:35.238Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEtJQU1XT0Q5Y1ZuZBoCamQaAmly/streams/CAcaAmpkGgJpciCx58y2BSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEtJQU1XT0Q5Y1ZuZBoCamQaAmly/streams/CAcaAmpkGgJpciCx58y2BSgC': offset 78055 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEtJQU1XT0Q5Y1ZuZBoCamQaAmly/streams/CAcaAmpkGgJpciCx58y2BSgC': offset 78055 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
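
Each of the SEVERE entries above is the same failure mode: FAILED_PRECONDITION with "offset N has not been allocated yet" means the reader requested rows at a stream offset the Storage Read API backend had not yet made available, a transient condition while the job is running. Dataflow retries the affected work items, which is why the read operation still finishes below and the job ends with status DONE. For reference, a minimal sketch of resuming a read at the failing offset with backoff, assuming the v1 BigQueryReadClient; readWithResume is a hypothetical helper for illustration, not Beam's internal retry logic:

    import com.google.api.gax.rpc.FailedPreconditionException;
    import com.google.api.gax.rpc.ServerStream;
    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    public class ResumableStreamRead {
      static long readWithResume(BigQueryReadClient client, String stream, long offset)
          throws InterruptedException {
        for (int attempt = 0; attempt < 5; attempt++) {
          ReadRowsRequest request = ReadRowsRequest.newBuilder()
              .setReadStream(stream)  // "projects/.../sessions/.../streams/..."
              .setOffset(offset)      // resume exactly where the last attempt stopped
              .build();
          try {
            ServerStream<ReadRowsResponse> rows = client.readRowsCallable().call(request);
            for (ReadRowsResponse response : rows) {
              offset += response.getRowCount();  // advance past rows fully received
            }
            return offset;  // stream drained successfully
          } catch (FailedPreconditionException e) {
            Thread.sleep(1000L << attempt);  // transient: back off and re-request
          }
        }
        throw new IllegalStateException("offset never became readable on " + stream);
      }
    }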

    Feb 08, 2022 6:47:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T18:47:37.421Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 08, 2022 6:47:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T18:47:37.578Z: Cleaning up.
    Feb 08, 2022 6:47:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T18:47:37.645Z: Stopping worker pool...
    Feb 08, 2022 6:50:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T18:50:04.875Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 08, 2022 6:50:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T18:50:05.179Z: Worker pool stopped.
    Feb 08, 2022 6:50:13 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-08_10_45_42-5376111834750232800 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 87e58e9e-7a4c-4bbe-8300-d757db627153 and timestamp: 2022-02-08T18:50:13.158000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.353

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 08, 2022 6:50:13 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
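
That warning is why the load test results above are only printed to the console: the InfluxDB publisher skips publishing when it is not told which database and measurement to write to. A hedged example of what the missing settings could look like as extra test pipeline options; the option names follow Beam's load-test convention and may differ for this suite, and the values are placeholders:

    --influxHost=http://localhost:8086 --influxDatabase=<database> --influxMeasurement=<measurement>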

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 50.168 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org
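
To iterate on the two failing tests locally, Gradle's standard --tests filter should apply, since integrationTest is a Test task; a hedged example (the exact invocation may also need the pipeline options the CI job supplies):

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --tests '*BigQueryIOPushDownIT'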

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 51s
165 actionable tasks: 105 executed, 58 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/ka5dkfskdxal2

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3008

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3008/display/redirect?page=changes>

Changes:

[Ismaël Mejía] [BEAM-13839] Upgrade zstd-jni to version 1.5.2-1

[mmack] [BEAM-13840] Fix usage of legacy rawtypes in AWS modules


------------------------------------------
[...truncated 369.78 KB...]
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-08_04_57_26-4011005898348384124
    Feb 08, 2022 12:57:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-08T12:57:31.112Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 08, 2022 12:57:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T12:57:39.554Z: Worker configuration: e2-standard-2 in us-central1-b.
    Feb 08, 2022 12:57:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T12:57:40.300Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 08, 2022 12:57:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T12:57:40.341Z: Expanding GroupByKey operations into optimizable parts.
    Feb 08, 2022 12:57:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T12:57:40.375Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 08, 2022 12:57:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T12:57:40.454Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 08, 2022 12:57:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T12:57:40.488Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 08, 2022 12:57:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T12:57:40.522Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 08, 2022 12:57:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T12:57:40.931Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 08, 2022 12:57:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T12:57:41.016Z: Starting 5 workers in us-central1-b...
    Feb 08, 2022 12:58:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T12:58:05.171Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 08, 2022 12:58:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T12:58:24.525Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 08, 2022 12:58:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T12:58:49.963Z: Workers have started successfully.
    Feb 08, 2022 12:58:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T12:58:49.999Z: Workers have started successfully.
    Feb 08, 2022 12:59:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-08T12:59:18.593Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDNnZm9OUjBLUFE0RBoCamQaAmly/streams/CAQaAmpkGgJpciDF759mKAI"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDNnZm9OUjBLUFE0RBoCamQaAmly/streams/CAQaAmpkGgJpciDF759mKAI': offset 74689 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDNnZm9OUjBLUFE0RBoCamQaAmly/streams/CAQaAmpkGgJpciDF759mKAI': offset 74689 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 08, 2022 12:59:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-08T12:59:18.610Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDNnZm9OUjBLUFE0RBoCamQaAmly/streams/CAYaAmpkGgJpciDxyZShBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDNnZm9OUjBLUFE0RBoCamQaAmly/streams/CAYaAmpkGgJpciDxyZShBygC': offset 100521 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDNnZm9OUjBLUFE0RBoCamQaAmly/streams/CAYaAmpkGgJpciDxyZShBygC': offset 100521 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 08, 2022 12:59:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-08T12:59:18.704Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDNnZm9OUjBLUFE0RBoCamQaAmly/streams/CAcaAmpkGgJpciCB5f2IBSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDNnZm9OUjBLUFE0RBoCamQaAmly/streams/CAcaAmpkGgJpciCB5f2IBSgC': offset 123130 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDNnZm9OUjBLUFE0RBoCamQaAmly/streams/CAcaAmpkGgJpciCB5f2IBSgC': offset 123130 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 08, 2022 12:59:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T12:59:22.688Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 08, 2022 12:59:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T12:59:22.905Z: Cleaning up.
    Feb 08, 2022 12:59:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T12:59:23.019Z: Stopping worker pool...
    Feb 08, 2022 1:01:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T13:01:54.397Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 08, 2022 1:01:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T13:01:54.533Z: Worker pool stopped.
    Feb 08, 2022 1:02:01 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-08_04_57_26-4011005898348384124 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 3bf414c0-8106-4ab7-b80e-357028ed8e8a and timestamp: 2022-02-08T13:02:02.036000000Z:
                     Metric:                    Value:
                   read_time                    11.023
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 08, 2022 1:02:02 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 57.079 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 53s
165 actionable tasks: 107 executed, 56 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/clbjviam7af7m

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3007

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3007/display/redirect?page=changes>

Changes:

[noreply] [release-23.6.0] Fix JIRA link for 2.36 blog (#16771)

[noreply] [BEAM-13647] Use role for Go worker binary. (#16729)


------------------------------------------
[...truncated 372.85 KB...]
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-07_22_46_36-18220132122182029039
    Feb 08, 2022 6:46:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-08T06:46:39.749Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 08, 2022 6:46:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T06:46:49.141Z: Worker configuration: e2-standard-2 in us-central1-b.
    Feb 08, 2022 6:46:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T06:46:49.869Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 08, 2022 6:46:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T06:46:49.921Z: Expanding GroupByKey operations into optimizable parts.
    Feb 08, 2022 6:46:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T06:46:49.960Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 08, 2022 6:46:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T06:46:50.012Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 08, 2022 6:46:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T06:46:50.039Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 08, 2022 6:46:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T06:46:50.072Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 08, 2022 6:46:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T06:46:50.724Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 08, 2022 6:46:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T06:46:50.805Z: Starting 5 workers in us-central1-b...
    Feb 08, 2022 6:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T06:47:17.492Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 08, 2022 6:47:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T06:47:37.555Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 08, 2022 6:48:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T06:48:04.429Z: Workers have started successfully.
    Feb 08, 2022 6:48:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T06:48:04.455Z: Workers have started successfully.
    Feb 08, 2022 6:48:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-08T06:48:35.545Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFp2b2VnQjdnYjN3bRoCamQaAmly/streams/CAkaAmpkGgJpciDanqSHAygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFp2b2VnQjdnYjN3bRoCamQaAmly/streams/CAkaAmpkGgJpciDanqSHAygC': offset 125860 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFp2b2VnQjdnYjN3bRoCamQaAmly/streams/CAkaAmpkGgJpciDanqSHAygC': offset 125860 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 08, 2022 6:48:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-08T06:48:36.457Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFp2b2VnQjdnYjN3bRoCamQaAmly/streams/CAUaAmpkGgJpciDG7qimASgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFp2b2VnQjdnYjN3bRoCamQaAmly/streams/CAUaAmpkGgJpciDG7qimASgC': offset 121966 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFp2b2VnQjdnYjN3bRoCamQaAmly/streams/CAUaAmpkGgJpciDG7qimASgC': offset 121966 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 08, 2022 6:48:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-08T06:48:36.530Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFp2b2VnQjdnYjN3bRoCamQaAmly/streams/GgJqZBoCaXIgk93bnQEoAg"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFp2b2VnQjdnYjN3bRoCamQaAmly/streams/GgJqZBoCaXIgk93bnQEoAg': offset 111596 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFp2b2VnQjdnYjN3bRoCamQaAmly/streams/GgJqZBoCaXIgk93bnQEoAg': offset 111596 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 08, 2022 6:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T06:48:38.608Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 08, 2022 6:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T06:48:38.765Z: Cleaning up.
    Feb 08, 2022 6:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T06:48:38.852Z: Stopping worker pool...
    Feb 08, 2022 6:51:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T06:51:02.179Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 08, 2022 6:51:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T06:51:02.241Z: Worker pool stopped.
    Feb 08, 2022 6:51:09 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-07_22_46_36-18220132122182029039 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): cadc0d14-cac5-46f1-ba5f-dc8df5070154 and timestamp: 2022-02-08T06:51:09.436000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.572

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 08, 2022 6:51:09 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
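
    Note: this warning means InfluxDB publishing was skipped because the measurement and
    database settings were absent. In Beam's test utilities these are normally supplied as
    pipeline options; the option names and values below are assumptions for illustration,
    not taken from this job's configuration:

        --influxDatabase=beam_test_metrics --influxMeasurement=sql_bqio_read_java_batch --influxHost=http://localhost:8086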

Gradle Test Executor 4 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 53.957 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings
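
For example, the failing task can be re-run with the individual deprecation warnings shown:

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --warning-mode all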

BUILD FAILED in 6m 47s
165 actionable tasks: 109 executed, 54 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/yfp4o26cbteiq

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3006

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3006/display/redirect?page=changes>

Changes:

[Kyle Weaver] [BEAM-12976] Log projection pushdown optimizations.

[benjamin.gonzalez] [BEAM-12572] Change jobs to run as cron jobs

[alexander.zhuravlev] [BEAM-13820] Changed color of delete icon in pipeline options dropdown,

[noreply] [BEAM-13193] Aggregates fn api outbound data/timers of different

[noreply] [BEAM-13767] Migrate a bundle of Gradle tasks to use configuration

[noreply] Merge pull request #16653 from [BEAM-12164]: Add integration tests for

[noreply] Merge pull request #16728 from [BEAM-13823] Update docs for SnowflakeIO

[noreply] Merge pull request #16660 from [BEAM-13771][Playground] Send multifile

[noreply] Merge pull request #16646 from [BEAM-13643][Playground] Setup running

[noreply] [BEAM-13015] Add state caching benchmark and move benchmarks to their

[noreply] [BEAM-13419] Check for initialization in dataflow runner (#16765)

[noreply] Merge pull request #16701 from [BEAM-13786] [Playground] [Bugfix] Update

[noreply] Merge pull request #16754 from [BEAM-13838][Playground] Add logs in case

[noreply] [BEAM-13293] consistent naming for expansion service address and flag

[noreply] Merge pull request #16700 from [BEAM-13790][Playground] Change logic of

[noreply] [BEAM-13830] update dependency for debeziumio expansion service (#16743)

[noreply] [BEAM-13761] consistent namings for expansion address in Debezium IO

[noreply] [BEAM-13806] Shutting down SchemaIO expansion services from Go VR

[noreply] [release-2.36.0] Update website/changelog for release 2.36.0 (#16627)

[noreply] [BEAM-13848] Update numpy intersphinx link (#16767)


------------------------------------------
[...truncated 371.50 KB...]
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-07_17_05_27-17489368626751156662
    Feb 08, 2022 1:05:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-08T01:05:30.506Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
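
    Note: with autoscaling disabled the pool size is fixed, so this warning is expected
    here; the relevant standard Dataflow options are, e.g.:

        --numWorkers=5 --autoscalingAlgorithm=NONE
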
    Feb 08, 2022 1:05:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T01:05:38.169Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 08, 2022 1:05:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T01:05:39.123Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 08, 2022 1:05:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T01:05:39.163Z: Expanding GroupByKey operations into optimizable parts.
    Feb 08, 2022 1:05:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T01:05:39.194Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 08, 2022 1:05:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T01:05:39.257Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 08, 2022 1:05:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T01:05:39.291Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 08, 2022 1:05:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T01:05:39.323Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 08, 2022 1:05:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T01:05:39.626Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 08, 2022 1:05:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T01:05:39.700Z: Starting 5 workers in us-central1-f...
    Feb 08, 2022 1:06:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T01:06:01.333Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 08, 2022 1:06:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T01:06:28.203Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 08, 2022 1:06:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T01:06:51.460Z: Workers have started successfully.
    Feb 08, 2022 1:06:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T01:06:51.496Z: Workers have started successfully.
    Feb 08, 2022 1:07:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-08T01:07:21.655Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGlpaUNNUEdzelpCRxoCamQaAmly/streams/CAMaAmpkGgJpciCB7oDtBigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGlpaUNNUEdzelpCRxoCamQaAmly/streams/CAMaAmpkGgJpciCB7oDtBigC': offset 66155 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGlpaUNNUEdzelpCRxoCamQaAmly/streams/CAMaAmpkGgJpciCB7oDtBigC': offset 66155 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 08, 2022 1:07:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-08T01:07:21.664Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGlpaUNNUEdzelpCRxoCamQaAmly/streams/CAgaAmpkGgJpciDPl621ASgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGlpaUNNUEdzelpCRxoCamQaAmly/streams/CAgaAmpkGgJpciDPl621ASgC': offset 68788 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGlpaUNNUEdzelpCRxoCamQaAmly/streams/CAgaAmpkGgJpciDPl621ASgC': offset 68788 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 08, 2022 1:07:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-08T01:07:22.471Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGlpaUNNUEdzelpCRxoCamQaAmly/streams/CAYaAmpkGgJpciCWs5HBBCgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGlpaUNNUEdzelpCRxoCamQaAmly/streams/CAYaAmpkGgJpciCWs5HBBCgC': offset 71313 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGlpaUNNUEdzelpCRxoCamQaAmly/streams/CAYaAmpkGgJpciCWs5HBBCgC': offset 71313 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 08, 2022 1:07:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T01:07:24.545Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 08, 2022 1:07:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T01:07:24.695Z: Cleaning up.
    Feb 08, 2022 1:07:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T01:07:24.765Z: Stopping worker pool...
    Feb 08, 2022 1:09:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T01:09:55.648Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 08, 2022 1:09:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-08T01:09:55.702Z: Worker pool stopped.
    Feb 08, 2022 1:10:02 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-07_17_05_27-17489368626751156662 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 19adb96f-06d5-4d54-8388-3179ecb944e6 and timestamp: 2022-02-08T01:10:02.325000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.804

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 08, 2022 1:10:02 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 4 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 55.726 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 55s
165 actionable tasks: 108 executed, 55 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/5ave34zyzdyoa

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3005

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3005/display/redirect?page=changes>

Changes:

[noreply] [BEAM-11971] Revert "Fix timer consistency in direct runner" (#16748)


------------------------------------------
[...truncated 357.67 KB...]
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 07, 2022 7:06:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 07, 2022 7:06:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 07, 2022 7:06:51 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 07, 2022 7:06:51 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 07, 2022 7:06:51 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 07, 2022 7:06:51 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 07, 2022 7:06:51 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1139678455]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)
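
    Note: of the three root causes listed in the message above, the usual remedy for a
    PCollection of Beam Rows is to attach a schema so a RowCoder can be inferred. A minimal
    self-contained sketch (field names mirror the query above; the types and sample values
    are assumptions, not this test's code):

        import org.apache.beam.sdk.Pipeline;
        import org.apache.beam.sdk.options.PipelineOptionsFactory;
        import org.apache.beam.sdk.schemas.Schema;
        import org.apache.beam.sdk.transforms.Create;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.create());
        Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();
        PCollection<Row> rows =
            pipeline
                .apply(Create.of(
                    Row.withSchema(schema).addValues("a", "story", "t", 3L).build()))
                .setRowSchema(schema);  // attaches a RowCoder; avoids the error above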

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 07, 2022 7:06:51 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 07, 2022 7:06:51 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 07, 2022 7:06:51 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 07, 2022 7:06:51 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 07, 2022 7:06:51 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 07, 2022 7:06:51 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 07, 2022 7:06:52 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 07, 2022 7:06:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
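
    Note: unlike the DEFAULT-method plan earlier, the BeamPushDownIOSourceRel above carries
    usedFields and a BigQueryFilter, i.e. both the projection and the filter are evaluated
    by the storage API rather than by a BeamCalcRel inside the pipeline. A standalone sketch
    of the same direct read with pushdown via BigQueryIO (the table reference and the
    `pipeline` variable are assumptions for illustration):

        import java.util.Arrays;
        import com.google.api.services.bigquery.model.TableRow;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
        import org.apache.beam.sdk.values.PCollection;

        PCollection<TableRow> rows =
            pipeline.apply(
                BigQueryIO.readTableRows()
                    .from("bigquery-public-data:hacker_news.full")  // hypothetical table
                    .withMethod(Method.DIRECT_READ)                 // BigQuery Storage Read API
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
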
    Feb 07, 2022 7:06:52 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 07, 2022 7:06:59 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 07, 2022 7:06:59 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-MIqoFZbRuUNbJvD4AgCsrRq8dUagT0p4lNA0hLXJHnc.jar
    Feb 07, 2022 7:07:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2411456936411460065.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-QEN7BOag2rJ7EvpQQ9Z3i_ATmq-YfV1frONhU3-Ft4Y.jar
    Feb 07, 2022 7:07:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/jakarta.xml.bind/jakarta.xml.bind-api/2.3.3/48e3b9cfc10752fba3521d6511f4165bea951801/jakarta.xml.bind-api-2.3.3.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jakarta.xml.bind-api-2.3.3-wEU59HLppt0MdoXqgtZ3KCJpq457rKLhRQDjgeDGzsU.jar
    Feb 07, 2022 7:07:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 360 files cached, 2 files newly uploaded in 2 seconds
    Feb 07, 2022 7:07:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 07, 2022 7:07:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142351 bytes, hash 346a06fcba7c811fb7abc967d6c654533442281a6b438a41b852e4b02e9a536f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-NGoG_Lp8gR-3q8ln1sZUUzRCKBprQ4pBuFLksC6aU28.pb
    Feb 07, 2022 7:07:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 07, 2022 7:07:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 07, 2022 7:07:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 07, 2022 7:07:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Feb 07, 2022 7:07:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-07_11_07_06-5889928236383113204?project=apache-beam-testing
    Feb 07, 2022 7:07:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-07_11_07_06-5889928236383113204
    Feb 07, 2022 7:07:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-07_11_07_06-5889928236383113204
    Feb 07, 2022 7:07:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-07T19:07:09.715Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 07, 2022 7:07:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T19:07:19.031Z: Worker configuration: e2-standard-2 in us-central1-b.
    Feb 07, 2022 7:07:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T19:07:19.616Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 07, 2022 7:07:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T19:07:19.647Z: Expanding GroupByKey operations into optimizable parts.
    Feb 07, 2022 7:07:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T19:07:19.673Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 07, 2022 7:07:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T19:07:19.747Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 07, 2022 7:07:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T19:07:19.773Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 07, 2022 7:07:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T19:07:19.806Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 07, 2022 7:07:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T19:07:20.126Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 07, 2022 7:07:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T19:07:20.204Z: Starting 5 workers in us-central1-b...
    Feb 07, 2022 7:07:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T19:07:28.646Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 07, 2022 7:08:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T19:08:05.344Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 07, 2022 7:08:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T19:08:32.706Z: Workers have started successfully.
    Feb 07, 2022 7:08:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T19:08:32.731Z: Workers have started successfully.
    Feb 07, 2022 7:09:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-07T19:09:10.138Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHAweVN2c0hYNUNDSBoCamQaAmly/streams/CAIaAmpkGgJpciC6gtrwASgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHAweVN2c0hYNUNDSBoCamQaAmly/streams/CAIaAmpkGgJpciC6gtrwASgC': offset 75335 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHAweVN2c0hYNUNDSBoCamQaAmly/streams/CAIaAmpkGgJpciC6gtrwASgC': offset 75335 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
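
The SEVERE entry above is a transient BigQuery Storage Read API failure: the Dataflow worker tried to resume its read stream at offset 75335, but the server had not yet allocated that offset, so the call failed with FAILED_PRECONDITION. Dataflow retries the work item, which is why the operation still finishes and the job ends with status DONE below. A minimal sketch of the kind of resuming call that trips this precondition, assuming the google-cloud-bigquerystorage v1 client (the stream name and offset are placeholders modeled on the log, not the worker's actual code):

    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    public class ResumeStreamSketch {
      public static void main(String[] args) throws Exception {
        // Placeholder stream name; real names come from the ReadSession, as in the log above.
        String stream = "projects/my-project/locations/us/sessions/SESSION_ID/streams/STREAM_ID";
        long resumeOffset = 75335L; // offset the worker tried to resume from

        try (BigQueryReadClient client = BigQueryReadClient.create()) {
          ReadRowsRequest request = ReadRowsRequest.newBuilder()
              .setReadStream(stream)
              .setOffset(resumeOffset) // FAILED_PRECONDITION if not yet allocated server-side
              .build();
          for (ReadRowsResponse response : client.readRowsCallable().call(request)) {
            // Consume response.getAvroRows() or response.getArrowRecordBatch() here.
          }
        }
      }
    }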

    Feb 07, 2022 7:09:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T19:09:12.170Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 07, 2022 7:09:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T19:09:12.331Z: Cleaning up.
    Feb 07, 2022 7:09:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T19:09:12.421Z: Stopping worker pool...
    Feb 07, 2022 7:11:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T19:11:44.470Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 07, 2022 7:11:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T19:11:44.520Z: Worker pool stopped.
    Feb 07, 2022 7:11:50 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-07_11_07_06-5889928236383113204 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): eacab895-7e44-4916-a72d-a1c49af7aa82 and timestamp: 2022-02-07T19:11:51.124000000Z:
                     Metric:                    Value:
                   read_time                     9.686
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 07, 2022 7:11:51 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.11 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.089 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 5 mins 14.294 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 21s
165 actionable tasks: 105 executed, 58 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/4dusn47mix34i

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3004

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3004/display/redirect?page=changes>

Changes:

[noreply] Merge pull request #16726 from [BEAM-12164]: Parses change streams

[mmack] [BEAM-13147] Avoid nullness issue during init of AwsModule (AWS Sdk v2)


------------------------------------------
[...truncated 360.32 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ce1e6f863a0cb3de514ffceb4a604b0a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 07, 2022 2:48:39 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 07, 2022 2:48:40 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 07, 2022 2:48:40 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 07, 2022 2:48:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 07, 2022 2:48:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 07, 2022 2:48:44 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 07, 2022 2:48:44 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 07, 2022 2:48:44 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 07, 2022 2:48:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 07, 2022 2:48:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@625540097]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
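
This failure (and the identical readUsingDefaultMethod failure below) happens at pipeline-construction time, before any job is submitted: the output of ParDo(RowMonitor) is a PCollection<Row>, and Beam cannot infer a coder for Row elements unless a schema is attached to the collection. A minimal sketch of the fix the error message itself suggests, with hypothetical helper names (rows and schema stand for the PCollection<Row> and its Beam Schema):

    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFixSketch {
      // Preferred form: attach the schema so the CoderRegistry can build a coder.
      static PCollection<Row> attachSchema(PCollection<Row> rows, Schema schema) {
        return rows.setRowSchema(schema);
      }

      // Explicit form, per the ".setCoder()" suggestion in the message.
      static PCollection<Row> attachCoder(PCollection<Row> rows, Schema schema) {
        return rows.setCoder(RowCoder.of(schema));
      }
    }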

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 07, 2022 2:48:46 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 07, 2022 2:48:46 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 07, 2022 2:48:46 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 07, 2022 2:48:46 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 07, 2022 2:48:46 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 07, 2022 2:48:46 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 07, 2022 2:48:46 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@290172400]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 07, 2022 2:48:46 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 07, 2022 2:48:46 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 07, 2022 2:48:46 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 07, 2022 2:48:46 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 07, 2022 2:48:46 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 07, 2022 2:48:46 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 07, 2022 2:48:47 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 07, 2022 2:48:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
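
The BEAMPlan and the log line above show the push-down working as intended: the projection is narrowed to the four usedFields and the filter moves into the source, so only matching rows and columns leave BigQuery. On the DIRECT_READ path this is expressed through the Storage Read API's TableReadOptions; a rough sketch of the equivalent read options, assuming the google-cloud-bigquerystorage v1 classes (field names and filter text copied from the plan, not from the test's actual code):

    import com.google.cloud.bigquery.storage.v1.ReadSession;
    import com.google.cloud.bigquery.storage.v1.ReadSession.TableReadOptions;

    class PushDownSketch {
      static ReadSession.Builder sessionWithPushDown() {
        TableReadOptions options = TableReadOptions.newBuilder()
            .addSelectedFields("by")     // usedFields from the plan
            .addSelectedFields("type")
            .addSelectedFields("title")
            .addSelectedFields("score")
            // Filter text as printed by BigQueryTable.buildIOReader above.
            .setRowRestriction("(type = 'story' OR type = 'job') AND score > 2")
            .build();
        return ReadSession.newBuilder().setReadOptions(options);
      }
    }
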
    Feb 07, 2022 2:48:47 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 07, 2022 2:48:51 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 07, 2022 2:48:51 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-dAZAKYsKDdHYqUEAs1vt_xUSK2vvGnzYKpd5E0M2YuI.jar
    Feb 07, 2022 2:48:51 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1120642171443572466.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-WhZURXwgV-jTa0KIGDkjp1LylyAhS4Lm_DjcFM0r3NM.jar
    Feb 07, 2022 2:48:52 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Feb 07, 2022 2:48:52 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 07, 2022 2:48:52 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142351 bytes, hash dd05b12dd7d36bd37db566706b0d135db209798ab7939300496e5fad0ce4194f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-3QWxLdfTa9N9tWZwaw0TXbIJeYq3k5MASW5frQzkGU8.pb
    Feb 07, 2022 2:48:55 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 07, 2022 2:48:55 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 07, 2022 2:48:55 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 07, 2022 2:48:55 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Feb 07, 2022 2:48:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-07_06_48_55-6011341454757342391?project=apache-beam-testing
    Feb 07, 2022 2:48:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-07_06_48_55-6011341454757342391
    Feb 07, 2022 2:48:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-07_06_48_55-6011341454757342391
    Feb 07, 2022 2:49:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-07T14:48:59.182Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 07, 2022 2:49:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T14:49:10.093Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 07, 2022 2:49:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T14:49:10.939Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 07, 2022 2:49:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T14:49:10.997Z: Expanding GroupByKey operations into optimizable parts.
    Feb 07, 2022 2:49:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T14:49:11.037Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 07, 2022 2:49:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T14:49:11.122Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 07, 2022 2:49:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T14:49:11.160Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 07, 2022 2:49:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T14:49:11.204Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 07, 2022 2:49:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T14:49:11.640Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 07, 2022 2:49:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T14:49:11.721Z: Starting 5 workers in us-central1-a...
    Feb 07, 2022 2:49:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T14:49:33.897Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 07, 2022 2:49:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T14:49:57.335Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 07, 2022 2:50:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T14:50:20.363Z: Workers have started successfully.
    Feb 07, 2022 2:50:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T14:50:20.442Z: Workers have started successfully.
    Feb 07, 2022 2:50:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T14:50:47.332Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 07, 2022 2:50:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T14:50:47.492Z: Cleaning up.
    Feb 07, 2022 2:50:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T14:50:47.587Z: Stopping worker pool...
    Feb 07, 2022 2:53:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T14:53:19.375Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 07, 2022 2:53:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-07T14:53:19.416Z: Worker pool stopped.
    Feb 07, 2022 2:53:27 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-07_06_48_55-6011341454757342391 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f009fb8d-34f0-4912-a13e-fa7b34b5963c and timestamp: 2022-02-07T14:53:27.137000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.049

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 07, 2022 2:53:27 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 52.394 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 8m 33s
165 actionable tasks: 107 executed, 56 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/uihi5rodngd4q

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3003

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3003/display/redirect>

Changes:


------------------------------------------
[...truncated 362.90 KB...]
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-05_22_45_09-13847894858108236533
    Feb 06, 2022 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-06T06:45:12.835Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 06, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T06:45:20.468Z: Worker configuration: e2-standard-2 in us-central1-c.
    Feb 06, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T06:45:21.200Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 06, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T06:45:21.241Z: Expanding GroupByKey operations into optimizable parts.
    Feb 06, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T06:45:21.277Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 06, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T06:45:21.344Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 06, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T06:45:21.410Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 06, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T06:45:21.446Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 06, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T06:45:21.835Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 06, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T06:45:21.914Z: Starting 5 workers in us-central1-c...
    Feb 06, 2022 6:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T06:45:45.394Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 06, 2022 6:46:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T06:46:06.435Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 06, 2022 6:46:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T06:46:31.926Z: Workers have started successfully.
    Feb 06, 2022 6:46:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T06:46:31.953Z: Workers have started successfully.
    Feb 06, 2022 6:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-06T06:47:01.449Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGcxMGtESllCMDFuShoCamQaAmly/streams/CAEaAmpkGgJpciDwl4qRASgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGcxMGtESllCMDFuShoCamQaAmly/streams/CAEaAmpkGgJpciDwl4qRASgC': offset 122861 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGcxMGtESllCMDFuShoCamQaAmly/streams/CAEaAmpkGgJpciDwl4qRASgC': offset 122861 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 06, 2022 6:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-06T06:47:02.716Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGcxMGtESllCMDFuShoCamQaAmly/streams/CAcaAmpkGgJpciDKuvsYKAI"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGcxMGtESllCMDFuShoCamQaAmly/streams/CAcaAmpkGgJpciDKuvsYKAI': offset 110782 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGcxMGtESllCMDFuShoCamQaAmly/streams/CAcaAmpkGgJpciDKuvsYKAI': offset 110782 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 06, 2022 6:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-06T06:47:02.875Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGcxMGtESllCMDFuShoCamQaAmly/streams/GgJqZBoCaXIg2didpQYoAg"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGcxMGtESllCMDFuShoCamQaAmly/streams/GgJqZBoCaXIg2didpQYoAg': offset 64634 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGcxMGtESllCMDFuShoCamQaAmly/streams/GgJqZBoCaXIg2didpQYoAg': offset 64634 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 06, 2022 6:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T06:47:05.568Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 06, 2022 6:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T06:47:05.711Z: Cleaning up.
    Feb 06, 2022 6:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T06:47:05.781Z: Stopping worker pool...
    Feb 06, 2022 6:49:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T06:49:34.168Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 06, 2022 6:49:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T06:49:34.207Z: Worker pool stopped.
    Feb 06, 2022 6:49:41 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-05_22_45_09-13847894858108236533 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 62ac2536-8825-4b50-847b-c17c3c99616f and timestamp: 2022-02-06T06:49:41.858000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.268

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 06, 2022 6:49:41 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 51.232 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 17s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/5z3s3t266s5do

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3002

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3002/display/redirect>

Changes:


------------------------------------------
[...truncated 348.10 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 333af01f4949428569159fac052e99a8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 06, 2022 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 06, 2022 12:44:51 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 06, 2022 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 06, 2022 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 06, 2022 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 06, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 06, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 06, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 06, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 06, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@625540097]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
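
The IllegalStateException above names its own fix: a PCollection of Beam Rows cannot get a coder by inference, so a schema has to be attached before the next transform is applied. A minimal, self-contained sketch of the PCollection.setRowSchema pattern the message asks for (class name and sample values are hypothetical; field names mirror the query's projection):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFixSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // Schema mirroring the projection in the SQL above.
        Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt32Field("score")
            .build();

        PCollection<Row> rows = p
            .apply(Create.of("story"))
            .apply(ParDo.of(new DoFn<String, Row>() {
              @ProcessElement
              public void processElement(@Element String type, OutputReceiver<Row> out) {
                out.output(Row.withSchema(schema)
                    .addValues("a-user", type, "a title", 3).build());
              }
            }))
            // Rows carry no inferable coder; attaching the schema installs a
            // RowCoder and avoids the IllegalStateException logged above.
            .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }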

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 06, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 06, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 06, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 06, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 06, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 06, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 06, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@434699861]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 06, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 06, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 06, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 06, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 06, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 06, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 06, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 06, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
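
For context, the pushed-down plan above is roughly what a hand-written DIRECT_READ would express through BigQueryIO's projection and filter options. A sketch under that assumption (the table reference is a stand-in, not the test's actual table; running it requires GCP credentials):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.values.PCollection;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // DIRECT_READ lets the Storage Read API apply the column projection and
        // the row restriction server-side, matching the plan in the log above.
        PCollection<TableRow> rows = p.apply(
            BigQueryIO.readTableRows()
                .from("apache-beam-testing:beam.HACKER_NEWS") // assumed table reference
                .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }
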
    Feb 06, 2022 12:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 06, 2022 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 06, 2022 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-dAZAKYsKDdHYqUEAs1vt_xUSK2vvGnzYKpd5E0M2YuI.jar
    Feb 06, 2022 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6422248096619617013.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-QBdpW0VISTPRzkL7jeUnwMGOUZAHRw7J7Jkz3fjjLhc.jar
    Feb 06, 2022 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Feb 06, 2022 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 06, 2022 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142351 bytes, hash 0d4d1119357934a717ed817ae0264ba7b178f0b5e714f423b4a842179ba4a4bc> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-DU0RGTV5NKcX7YF64CZLp7F48LXnFPQjtKhCF5ukpLw.pb
    Feb 06, 2022 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 06, 2022 12:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 06, 2022 12:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 06, 2022 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Feb 06, 2022 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-05_16_45_06-17558250428308116752?project=apache-beam-testing
    Feb 06, 2022 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-05_16_45_06-17558250428308116752
    Feb 06, 2022 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-05_16_45_06-17558250428308116752
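
Alongside the gcloud route above, the same cancellation is exposed programmatically on the job handle returned by run(). A minimal sketch (illustrative; assumes a runner whose result handle supports cancellation, as Dataflow's DataflowPipelineJob does):

    import java.io.IOException;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.PipelineResult;

    public class CancelJobSketch {
      public static void main(String[] args) throws IOException {
        Pipeline p = Pipeline.create();
        PipelineResult result = p.run();
        // Programmatic counterpart of the 'gcloud dataflow jobs cancel' command above.
        result.cancel();
      }
    }
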
    Feb 06, 2022 12:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-06T00:45:09.262Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 06, 2022 12:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T00:45:20.194Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 06, 2022 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T00:45:20.850Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 06, 2022 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T00:45:20.892Z: Expanding GroupByKey operations into optimizable parts.
    Feb 06, 2022 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T00:45:20.921Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 06, 2022 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T00:45:21.017Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 06, 2022 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T00:45:21.071Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 06, 2022 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T00:45:21.122Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 06, 2022 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T00:45:21.511Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 06, 2022 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T00:45:21.609Z: Starting 5 workers in us-central1-a...
    Feb 06, 2022 12:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T00:45:43.513Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 06, 2022 12:46:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T00:46:07.065Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 06, 2022 12:46:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T00:46:29.885Z: Workers have started successfully.
    Feb 06, 2022 12:46:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T00:46:29.924Z: Workers have started successfully.
    Feb 06, 2022 12:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T00:46:56.552Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 06, 2022 12:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T00:46:56.703Z: Cleaning up.
    Feb 06, 2022 12:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T00:46:56.787Z: Stopping worker pool...
    Feb 06, 2022 12:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T00:49:22.070Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 06, 2022 12:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-06T00:49:22.111Z: Worker pool stopped.
    Feb 06, 2022 12:49:28 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-05_16_45_06-17558250428308116752 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c7fabee7-8975-4793-bb95-a54bcc93134c and timestamp: 2022-02-06T00:49:28.798000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.507

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 06, 2022 12:49:28 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.037 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 41.572 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 8s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/ogqyjvstr4mmq

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3001

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3001/display/redirect>

Changes:


------------------------------------------
[...truncated 357.79 KB...]
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 05, 2022 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 05, 2022 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 05, 2022 6:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 05, 2022 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 05, 2022 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 05, 2022 6:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 05, 2022 6:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 05, 2022 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 05, 2022 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 05, 2022 6:45:16 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 05, 2022 6:45:16 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-dAZAKYsKDdHYqUEAs1vt_xUSK2vvGnzYKpd5E0M2YuI.jar
    Feb 05, 2022 6:45:16 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3953516968182962782.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ygNO0HXZIh0QZ8MiX-agWKc08uB1P-dY583V1ZgxX54.jar
    Feb 05, 2022 6:45:16 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Feb 05, 2022 6:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 05, 2022 6:45:16 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142350 bytes, hash 94bb4f4e9d79e30b9afd21eaa7810ca4958b4f1076f27e38a9d4b0454c32c5a4> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-lLtPTp154wua_SHqp4EMpJWLTxB28n44qdSwRUwyxaQ.pb
    Feb 05, 2022 6:45:20 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 05, 2022 6:45:20 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 05, 2022 6:45:20 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 05, 2022 6:45:20 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Feb 05, 2022 6:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-05_10_45_20-4554258308699327719?project=apache-beam-testing
    Feb 05, 2022 6:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-05_10_45_20-4554258308699327719
    Feb 05, 2022 6:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-05_10_45_20-4554258308699327719
    Feb 05, 2022 6:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-05T18:45:23.708Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 05, 2022 6:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T18:45:32.597Z: Worker configuration: e2-standard-2 in us-central1-c.
    Feb 05, 2022 6:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T18:45:33.409Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 05, 2022 6:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T18:45:33.451Z: Expanding GroupByKey operations into optimizable parts.
    Feb 05, 2022 6:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T18:45:33.476Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 05, 2022 6:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T18:45:33.563Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 05, 2022 6:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T18:45:33.592Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 05, 2022 6:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T18:45:33.619Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 05, 2022 6:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T18:45:33.960Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 05, 2022 6:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T18:45:34.049Z: Starting 5 workers in us-central1-c...
    Feb 05, 2022 6:45:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T18:45:51.128Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 05, 2022 6:46:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T18:46:26.409Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 05, 2022 6:46:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T18:46:52.906Z: Workers have started successfully.
    Feb 05, 2022 6:46:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T18:46:52.947Z: Workers have started successfully.
    Feb 05, 2022 6:47:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-05T18:47:24.468Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGU1R2tvZkxneWZUMxoCamQaAmly/streams/CAUaAmpkGgJpciDsp8WbBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGU1R2tvZkxneWZUMxoCamQaAmly/streams/CAUaAmpkGgJpciDsp8WbBygC': offset 128946 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGU1R2tvZkxneWZUMxoCamQaAmly/streams/CAUaAmpkGgJpciDsp8WbBygC': offset 128946 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 05, 2022 6:47:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-05T18:47:24.472Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGU1R2tvZkxneWZUMxoCamQaAmly/streams/CAgaAmpkGgJpciDfzuf7AygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGU1R2tvZkxneWZUMxoCamQaAmly/streams/CAgaAmpkGgJpciDfzuf7AygC': offset 84089 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGU1R2tvZkxneWZUMxoCamQaAmly/streams/CAgaAmpkGgJpciDfzuf7AygC': offset 84089 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 05, 2022 6:47:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T18:47:26.760Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 05, 2022 6:47:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T18:47:26.913Z: Cleaning up.
    Feb 05, 2022 6:47:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T18:47:27.006Z: Stopping worker pool...
    Feb 05, 2022 6:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T18:49:54.431Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 05, 2022 6:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T18:49:54.501Z: Worker pool stopped.
    Feb 05, 2022 6:50:01 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-05_10_45_20-4554258308699327719 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f27c1ff2-8aa4-49f6-bc8e-e66eabf7c661 and timestamp: 2022-02-05T18:50:01.699000000Z:
                     Metric:                    Value:
                 fields_read                 4630540.0
                   read_time                     9.004

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 05, 2022 6:50:01 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 5 mins 6.082 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 37s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/dw5hcrotxivfs

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #3000

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3000/display/redirect>

Changes:


------------------------------------------
[...truncated 350.19 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 333af01f4949428569159fac052e99a8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 05, 2022 12:57:27 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 05, 2022 12:57:28 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 05, 2022 12:57:29 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 05, 2022 12:57:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 05, 2022 12:57:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 05, 2022 12:57:32 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 05, 2022 12:57:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 05, 2022 12:57:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 05, 2022 12:57:33 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 05, 2022 12:57:33 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@625540097]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
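
The same failure recurs here, and the error text also lists the other accepted fix: specify the coder manually. A small sketch of that variant (hypothetical class and values; RowCoder.of wraps a schema into a concrete coder):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.Row;

    public class RowCoderFixSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        Schema schema = Schema.builder().addStringField("type").addInt32Field("score").build();
        p.apply(Create.of("job"))
            .apply(ParDo.of(new DoFn<String, Row>() {
              @ProcessElement
              public void processElement(@Element String type, OutputReceiver<Row> out) {
                out.output(Row.withSchema(schema).addValues(type, 3).build());
              }
            }))
            // The first root cause listed above: set the coder explicitly
            // instead of attaching a schema.
            .setCoder(RowCoder.of(schema));
        p.run().waitUntilFinish();
      }
    }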

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 05, 2022 12:57:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 05, 2022 12:57:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 05, 2022 12:57:34 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 05, 2022 12:57:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 05, 2022 12:57:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 05, 2022 12:57:34 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 05, 2022 12:57:34 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@434699861]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 05, 2022 12:57:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 05, 2022 12:57:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 05, 2022 12:57:34 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 05, 2022 12:57:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 05, 2022 12:57:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 05, 2022 12:57:34 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 05, 2022 12:57:34 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 05, 2022 12:57:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
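
The plan above shows the push-down at work: instead of a BeamCalcRel applying the filter and projection after the read (as in the two failing tests), the planner folded both into the BigQuery Storage Read API read itself (BeamPushDownIOSourceRel). Written against BigQueryIO directly, the pushed-down read amounts to roughly the following sketch (the table name is illustrative, not the table this IT reads):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.values.PCollection;

    public class PushDownEquivalentSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        PCollection<TableRow> rows =
            p.apply(
                BigQueryIO.readTableRows()
                    .from("bigquery-public-data:hacker_news.full") // illustrative table
                    .withMethod(Method.DIRECT_READ)
                    // Only the projected columns cross the wire...
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    // ...and the WHERE clause is evaluated server-side.
                    .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }
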
    Feb 05, 2022 12:57:35 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 05, 2022 12:57:41 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 05, 2022 12:57:41 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-dAZAKYsKDdHYqUEAs1vt_xUSK2vvGnzYKpd5E0M2YuI.jar
    Feb 05, 2022 12:57:41 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6659570394851378177.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ps7YqbSzzMPDMXdlrOjR8WqOvNG1qt0X2MIsI9AYtYw.jar
    Feb 05, 2022 12:57:41 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Feb 05, 2022 12:57:41 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 05, 2022 12:57:41 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142351 bytes, hash b45aca3cd80c385491e678ffecf97b04967c25867e7a81ca38a513ad0d0554d3> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-tFrKPNgMOFSR5nj_7Pl7BJZ8JYZ-eoHKOKUTrQ0FVNM.pb
    Feb 05, 2022 12:57:44 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 05, 2022 12:57:45 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 05, 2022 12:57:45 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 05, 2022 12:57:45 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Feb 05, 2022 12:57:46 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-05_04_57_45-2668097208432679340?project=apache-beam-testing
    Feb 05, 2022 12:57:46 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-05_04_57_45-2668097208432679340
    Feb 05, 2022 12:57:46 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-05_04_57_45-2668097208432679340
    Feb 05, 2022 12:57:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-05T12:57:48.584Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 05, 2022 12:57:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T12:57:59.021Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 05, 2022 12:58:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T12:57:59.801Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 05, 2022 12:58:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T12:57:59.839Z: Expanding GroupByKey operations into optimizable parts.
    Feb 05, 2022 12:58:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T12:57:59.874Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 05, 2022 12:58:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T12:57:59.926Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 05, 2022 12:58:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T12:57:59.947Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 05, 2022 12:58:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T12:58:00.010Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 05, 2022 12:58:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T12:58:00.353Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 05, 2022 12:58:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T12:58:00.434Z: Starting 5 workers in us-central1-f...
    Feb 05, 2022 12:58:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T12:58:12.933Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 05, 2022 12:58:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T12:58:45.177Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 05, 2022 12:59:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T12:59:12.401Z: Workers have started successfully.
    Feb 05, 2022 12:59:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T12:59:12.436Z: Workers have started successfully.
    Feb 05, 2022 12:59:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T12:59:42.799Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 05, 2022 12:59:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T12:59:42.930Z: Cleaning up.
    Feb 05, 2022 12:59:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T12:59:43.035Z: Stopping worker pool...
    Feb 05, 2022 1:02:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T13:02:15.180Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 05, 2022 1:02:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T13:02:15.216Z: Worker pool stopped.
    Feb 05, 2022 1:02:32 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-05_04_57_45-2668097208432679340 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 27ae034d-6184-473d-ae21-8b5ebb3483c9 and timestamp: 2022-02-05T13:02:32.644000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.233

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 05, 2022 1:02:32 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
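
The warning above means the run simply skipped metrics export: InfluxDB publishing in Beam's test utilities is opt-in and stays off unless a measurement and database are configured. A sketch of what enabling it would look like, by extending the -DbeamTestPipelineOptions array shown at the top of the log (option names follow Beam's load-test conventions; the host value is a placeholder):

    "--influxMeasurement=sql_bqio_read_java_batch",
    "--influxDatabase=beam_test_metrics",
    "--influxHost=http://localhost:8086"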

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 5 mins 8.589 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 18s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/sun6behghigio

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2999

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2999/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13799] Created a Dataproc cluster manager for Interactive Beam

[noreply] Merge pull request #16727: [BEAM-11971] remove unsafe Concurrent data


------------------------------------------
[...truncated 373.49 KB...]
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-04_22_46_33-8344189500839938247
    Feb 05, 2022 6:46:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-05T06:46:37.045Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 05, 2022 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T06:46:46.698Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 05, 2022 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T06:46:47.564Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 05, 2022 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T06:46:47.603Z: Expanding GroupByKey operations into optimizable parts.
    Feb 05, 2022 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T06:46:47.630Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 05, 2022 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T06:46:47.692Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 05, 2022 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T06:46:47.735Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 05, 2022 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T06:46:47.763Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 05, 2022 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T06:46:48.127Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 05, 2022 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T06:46:48.193Z: Starting 5 workers in us-central1-f...
    Feb 05, 2022 6:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T06:47:02.146Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 05, 2022 6:47:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T06:47:33.964Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 05, 2022 6:48:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T06:47:58.511Z: Workers have started successfully.
    Feb 05, 2022 6:48:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T06:47:58.546Z: Workers have started successfully.
    Feb 05, 2022 6:48:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-05T06:48:29.009Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHM5Sk9falBENExsbxoCamQaAmly/streams/CAcaAmpkGgJpciD6qe2lBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHM5Sk9falBENExsbxoCamQaAmly/streams/CAcaAmpkGgJpciD6qe2lBygC': offset 110443 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHM5Sk9falBENExsbxoCamQaAmly/streams/CAcaAmpkGgJpciD6qe2lBygC': offset 110443 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 05, 2022 6:48:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-05T06:48:29.552Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHM5Sk9falBENExsbxoCamQaAmly/streams/CAQaAmpkGgJpciDz0NoNKAI"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHM5Sk9falBENExsbxoCamQaAmly/streams/CAQaAmpkGgJpciDz0NoNKAI': offset 89086 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHM5Sk9falBENExsbxoCamQaAmly/streams/CAQaAmpkGgJpciDz0NoNKAI': offset 89086 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 05, 2022 6:48:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-05T06:48:29.563Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHM5Sk9falBENExsbxoCamQaAmly/streams/CAEaAmpkGgJpciCw_cagBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHM5Sk9falBENExsbxoCamQaAmly/streams/CAEaAmpkGgJpciCw_cagBygC': offset 90077 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHM5Sk9falBENExsbxoCamQaAmly/streams/CAEaAmpkGgJpciCw_cagBygC': offset 90077 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
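
The repeated FAILED_PRECONDITION errors above all have the same shape: a worker resumed a Storage Read API stream at an offset the server had not yet produced rows for. As a rough sketch of the underlying call (stream name and offset are placeholders; this is the google-cloud-bigquerystorage v1 client surface, not Beam's internal reader):

    import com.google.api.gax.rpc.ServerStream;
    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    public class StreamResumeSketch {
      public static void main(String[] args) throws Exception {
        String streamName =
            "projects/your-project/locations/us/sessions/SESSION_ID/streams/STREAM_ID"; // placeholder
        long resumeOffset = 0; // rows already consumed from this stream

        try (BigQueryReadClient client = BigQueryReadClient.create()) {
          ReadRowsRequest request =
              ReadRowsRequest.newBuilder()
                  .setReadStream(streamName)
                  // "offset N has not been allocated yet" is the server
                  // rejecting an offset past the stream's current progress.
                  .setOffset(resumeOffset)
                  .build();
          ServerStream<ReadRowsResponse> stream = client.readRowsCallable().call(request);
          for (ReadRowsResponse response : stream) {
            long rowCount = response.getRowCount(); // decode Avro/Arrow payload as needed
          }
        }
      }
    }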

    Feb 05, 2022 6:48:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T06:48:32.394Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 05, 2022 6:48:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T06:48:32.530Z: Cleaning up.
    Feb 05, 2022 6:48:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T06:48:32.598Z: Stopping worker pool...
    Feb 05, 2022 6:50:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T06:50:58.204Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 05, 2022 6:50:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T06:50:58.246Z: Worker pool stopped.
    Feb 05, 2022 6:51:06 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-04_22_46_33-8344189500839938247 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 5e739f2b-b28d-4b21-a577-897450bcbf2f and timestamp: 2022-02-05T06:51:06.312000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.875

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 05, 2022 6:51:06 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 4 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 52.352 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 47s
165 actionable tasks: 108 executed, 55 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/wehq2ks6nytjo

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2998

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2998/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13811] Fix save_main_session arg in tests examples (#16709)

[Kiley Sok] Update beam-master version

[noreply] [BEAM-13015] Calculate exception for closing BeamFnDataInboundObserver2

[noreply] Minor doc tweaks for validating vendoring. (#16747)

[noreply] [BEAM-13686] OOM while logging a large pipeline even when logging level

[noreply] [BEAM-13629] Update URL artifact type for Dataflow Go (#16490)

[noreply] [BEAM-13832] Add automated expansion service start-up to JDBCio (#16739)

[noreply] [BEAM-13831] Add automated expansion service infra into Debezium Read()

[noreply] [BEAM-13821] Add automated expansion service start-up to KafkaIO


------------------------------------------
[...truncated 360.07 KB...]
    Feb 05, 2022 1:08:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 05, 2022 1:08:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 05, 2022 1:08:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 05, 2022 1:08:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 05, 2022 1:08:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 05, 2022 1:08:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 05, 2022 1:08:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 05, 2022 1:08:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 05, 2022 1:08:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 05, 2022 1:08:32 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 05, 2022 1:08:32 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-dAZAKYsKDdHYqUEAs1vt_xUSK2vvGnzYKpd5E0M2YuI.jar
    Feb 05, 2022 1:08:32 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4617004291168804062.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Sq8V7Fk-Dv-olc0OrReE_ZL4lIpwtGyhWudkw-oferM.jar
    Feb 05, 2022 1:08:33 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Feb 05, 2022 1:08:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 05, 2022 1:08:33 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142351 bytes, hash 78609278b29075556780578d611de21ebc58f05273629722f118f21bd202e729> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-eGCSeLKQdVVngFeNYR3iHrxY8FJzYpci8RjyG9IC5yk.pb
    Feb 05, 2022 1:08:36 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 05, 2022 1:08:36 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 05, 2022 1:08:36 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 05, 2022 1:08:36 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Feb 05, 2022 1:08:37 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-04_17_08_36-4365359837609646089?project=apache-beam-testing
    Feb 05, 2022 1:08:37 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-04_17_08_36-4365359837609646089
    Feb 05, 2022 1:08:37 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-04_17_08_36-4365359837609646089
    Feb 05, 2022 1:08:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-05T01:08:39.888Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 05, 2022 1:08:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T01:08:49.009Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 05, 2022 1:08:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T01:08:49.638Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 05, 2022 1:08:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T01:08:49.702Z: Expanding GroupByKey operations into optimizable parts.
    Feb 05, 2022 1:08:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T01:08:49.730Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 05, 2022 1:08:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T01:08:49.806Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 05, 2022 1:08:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T01:08:49.845Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 05, 2022 1:08:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T01:08:49.968Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 05, 2022 1:08:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T01:08:50.458Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 05, 2022 1:08:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T01:08:50.538Z: Starting 5 workers in us-central1-f...
    Feb 05, 2022 1:09:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T01:09:03.688Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 05, 2022 1:09:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T01:09:35.756Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 05, 2022 1:10:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T01:10:02.191Z: Workers have started successfully.
    Feb 05, 2022 1:10:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T01:10:02.243Z: Workers have started successfully.
    Feb 05, 2022 1:10:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-05T01:10:34.528Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFRUYkFNcVdwZHZRVBoCamQaAmly/streams/CAEaAmpkGgJpciDR1LnWBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFRUYkFNcVdwZHZRVBoCamQaAmly/streams/CAEaAmpkGgJpciDR1LnWBygC': offset 108213 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFRUYkFNcVdwZHZRVBoCamQaAmly/streams/CAEaAmpkGgJpciDR1LnWBygC': offset 108213 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 05, 2022 1:10:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-05T01:10:34.537Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFRUYkFNcVdwZHZRVBoCamQaAmly/streams/CAUaAmpkGgJpciDqyMilAygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFRUYkFNcVdwZHZRVBoCamQaAmly/streams/CAUaAmpkGgJpciDqyMilAygC': offset 101610 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFRUYkFNcVdwZHZRVBoCamQaAmly/streams/CAUaAmpkGgJpciDqyMilAygC': offset 101610 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 05, 2022 1:10:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T01:10:37.024Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 05, 2022 1:10:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T01:10:37.180Z: Cleaning up.
    Feb 05, 2022 1:10:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T01:10:37.311Z: Stopping worker pool...
    Feb 05, 2022 1:13:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T01:13:00.839Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 05, 2022 1:13:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-05T01:13:00.889Z: Worker pool stopped.
    Feb 05, 2022 1:13:06 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-04_17_08_36-4365359837609646089 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 43c8eb16-bb08-46db-a8a4-ab4c8becae05 and timestamp: 2022-02-05T01:13:06.816000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.554

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 05, 2022 1:13:06 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
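
Note that the metrics above were computed but never exported: InfluxDBPublisher bails out when the measurement and database settings are absent. Below is a sketch of what a configured publish could look like, reusing the test ID, timestamp, and read_time value from this log; the builder and publisher signatures are assumptions about org.apache.beam.sdk.testutils.publishing at this SDK snapshot, and the host/database/measurement values are placeholders.

    import java.util.Collections;
    import org.apache.beam.sdk.testutils.NamedTestResult;
    import org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher;
    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class PublishReadTime {
      public static void main(String[] args) {
        // Assumed builder API; all three values are placeholders.
        InfluxDBSettings settings =
            InfluxDBSettings.builder()
                .withHost("http://localhost:8086")
                .withDatabase("my_database")       // the missing "database"
                .withMeasurement("my_measurement") // the missing "measurement"
                .get();
        NamedTestResult result =
            NamedTestResult.create(
                "43c8eb16-bb08-46db-a8a4-ab4c8becae05", // test ID from the log
                "2022-02-05T01:13:06.816000000Z",       // timestamp from the log
                "read_time",
                10.554);
        InfluxDBPublisher.publishWithSettings(Collections.singletonList(result), settings);
      }
    }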

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 49.808 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 33s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/zoglllvrxniz2

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2997

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2997/display/redirect?page=changes>

Changes:

[mmack] [BEAM-13663] Remove unused duplicate option for AWS client configuration

[mmack] [BEAM-13203] Deprecate SnsIO.writeAsync for AWS Sdk v2 due to risk of

[noreply] [BEAM-13828] Fix stale bot (#16734)

[noreply] Merge pull request #16364 from [BEAM-13182]  Add diagrams to backend


------------------------------------------
[...truncated 358.21 KB...]
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 04, 2022 6:49:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 04, 2022 6:49:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 04, 2022 6:49:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 04, 2022 6:49:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 04, 2022 6:49:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 04, 2022 6:49:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 04, 2022 6:49:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 04, 2022 6:49:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
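
The BEAMPlan above is the interesting part: the projection and the filter are folded into the source read (usedFields plus a BigQueryFilter), so BigQuery returns only the four columns and the matching rows. A hand-written BigQueryIO read that performs the equivalent push-down directly might look like this sketch; the table reference is a placeholder, not necessarily the table this test reads.

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class PushDownEquivalent {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        p.apply(
            BigQueryIO.readTableRows()
                .from("my-project:my_dataset.HACKER_NEWS") // placeholder table
                .withMethod(Method.DIRECT_READ)
                // The projection the planner derived (usedFields):
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // The predicate the planner pushed down (BigQueryFilter):
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }
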
    Feb 04, 2022 6:50:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 04, 2022 6:50:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 04, 2022 6:50:05 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-H6a6pO_uYSIX9GgLf0Gh2wY5tC_UJVqnErEFXS5_jtE.jar
    Feb 04, 2022 6:50:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4452554014524432396.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-jIu4QWILEfzq82wZhgSfkvftKDkiX86pMg2yscWdEXY.jar
    Feb 04, 2022 6:50:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Feb 04, 2022 6:50:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 04, 2022 6:50:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142351 bytes, hash 8519e840dbf75d22dd34345f6899390d6f682bd31550cfc6785927e2abe0c510> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-hRnoQNv3XSLdNDRfaJk5DW9oK9MVUM_GeFkn4qvgxRA.pb
    Feb 04, 2022 6:50:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 04, 2022 6:50:10 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 04, 2022 6:50:10 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 04, 2022 6:50:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Feb 04, 2022 6:50:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-04_10_50_10-3837366101783054802?project=apache-beam-testing
    Feb 04, 2022 6:50:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-04_10_50_10-3837366101783054802
    Feb 04, 2022 6:50:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-04_10_50_10-3837366101783054802
    Feb 04, 2022 6:50:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-04T18:50:14.275Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
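
This warning is the expected interplay of the worker-pool options: with autoscalingAlgorithm=NONE the pool is fixed at numWorkers, so maxNumWorkers has nothing to govern and Dataflow says so. A minimal sketch of the relevant option wiring, assuming the Dataflow runner's standard options interfaces and mirroring this job's values:

    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.runners.dataflow.options.DataflowPipelineWorkerPoolOptions.AutoscalingAlgorithmType;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class FixedPoolOptions {
      public static void main(String[] args) {
        DataflowPipelineOptions options =
            PipelineOptionsFactory.fromArgs(args).as(DataflowPipelineOptions.class);
        options.setAutoscalingAlgorithm(AutoscalingAlgorithmType.NONE); // fixed-size pool
        options.setNumWorkers(5);    // the size actually used
        options.setMaxNumWorkers(5); // ignored under NONE; this is what the warning flags
      }
    }
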
    Feb 04, 2022 6:50:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T18:50:25.726Z: Worker configuration: e2-standard-2 in us-central1-b.
    Feb 04, 2022 6:50:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T18:50:26.359Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 04, 2022 6:50:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T18:50:26.399Z: Expanding GroupByKey operations into optimizable parts.
    Feb 04, 2022 6:50:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T18:50:26.440Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 04, 2022 6:50:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T18:50:26.505Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 04, 2022 6:50:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T18:50:26.557Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 04, 2022 6:50:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T18:50:26.594Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 04, 2022 6:50:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T18:50:26.941Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 04, 2022 6:50:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T18:50:27.020Z: Starting 5 workers in us-central1-b...
    Feb 04, 2022 6:50:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T18:50:56.590Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 04, 2022 6:51:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T18:51:16.300Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 04, 2022 6:51:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T18:51:42.997Z: Workers have started successfully.
    Feb 04, 2022 6:51:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T18:51:43.035Z: Workers have started successfully.
    Feb 04, 2022 6:52:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-04T18:52:16.124Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFB3bTBMUXdPb0w3TBoCamQaAmly/streams/CAQaAmpkGgJpciCWq-miBCgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFB3bTBMUXdPb0w3TBoCamQaAmly/streams/CAQaAmpkGgJpciCWq-miBCgC': offset 72563 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFB3bTBMUXdPb0w3TBoCamQaAmly/streams/CAQaAmpkGgJpciCWq-miBCgC': offset 72563 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 04, 2022 6:52:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-04T18:52:16.291Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFB3bTBMUXdPb0w3TBoCamQaAmly/streams/CAEaAmpkGgJpciD0rq-UBSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFB3bTBMUXdPb0w3TBoCamQaAmly/streams/CAEaAmpkGgJpciD0rq-UBSgC': offset 90978 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFB3bTBMUXdPb0w3TBoCamQaAmly/streams/CAEaAmpkGgJpciD0rq-UBSgC': offset 90978 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 04, 2022 6:52:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T18:52:20.515Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 04, 2022 6:52:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T18:52:20.671Z: Cleaning up.
    Feb 04, 2022 6:52:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T18:52:20.759Z: Stopping worker pool...
    Feb 04, 2022 6:54:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T18:54:52.106Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 04, 2022 6:54:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T18:54:52.159Z: Worker pool stopped.
    Feb 04, 2022 6:54:59 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-04_10_50_10-3837366101783054802 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d18c54c0-3500-45d5-95e3-5b1cb1a98a45 and timestamp: 2022-02-04T18:54:59.077000000Z:
                     Metric:                    Value:
                   read_time                    10.844
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 04, 2022 6:54:59 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 5 mins 10.896 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 52s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/vkv7tzzildlm2

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2996

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2996/display/redirect?page=changes>

Changes:

[alexander.chermenin] Fixed CSS for Case study page


------------------------------------------
[...truncated 361.09 KB...]
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-04_04_44_53-6285543913487722693
    Feb 04, 2022 12:45:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-04T12:44:56.626Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 04, 2022 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T12:45:13.045Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 04, 2022 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T12:45:14.069Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 04, 2022 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T12:45:14.110Z: Expanding GroupByKey operations into optimizable parts.
    Feb 04, 2022 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T12:45:14.135Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 04, 2022 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T12:45:14.207Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 04, 2022 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T12:45:14.226Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 04, 2022 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T12:45:14.260Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 04, 2022 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T12:45:14.565Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 04, 2022 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T12:45:14.636Z: Starting 5 workers in us-central1-f...
    Feb 04, 2022 12:45:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T12:45:36.408Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 04, 2022 12:46:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T12:46:04.393Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 04, 2022 12:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T12:46:29.752Z: Workers have started successfully.
    Feb 04, 2022 12:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T12:46:29.770Z: Workers have started successfully.
    Feb 04, 2022 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-04T12:46:58.480Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDC1pb3lxU3dIQVQyehoCamQaAmly/streams/CAUaAmpkGgJpciD_5YbrBCgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDC1pb3lxU3dIQVQyehoCamQaAmly/streams/CAUaAmpkGgJpciD_5YbrBCgC': offset 70044 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDC1pb3lxU3dIQVQyehoCamQaAmly/streams/CAUaAmpkGgJpciD_5YbrBCgC': offset 70044 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 04, 2022 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-04T12:46:59.595Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDC1pb3lxU3dIQVQyehoCamQaAmly/streams/CAYaAmpkGgJpciDYuYe_AigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDC1pb3lxU3dIQVQyehoCamQaAmly/streams/CAYaAmpkGgJpciDYuYe_AigC': offset 72505 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDC1pb3lxU3dIQVQyehoCamQaAmly/streams/CAYaAmpkGgJpciDYuYe_AigC': offset 72505 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 04, 2022 12:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-04T12:47:00.483Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDC1pb3lxU3dIQVQyehoCamQaAmly/streams/CAkaAmpkGgJpciDrgubQASgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDC1pb3lxU3dIQVQyehoCamQaAmly/streams/CAkaAmpkGgJpciDrgubQASgC': offset 77799 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDC1pb3lxU3dIQVQyehoCamQaAmly/streams/CAkaAmpkGgJpciDrgubQASgC': offset 77799 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 04, 2022 12:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T12:47:03.503Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 04, 2022 12:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T12:47:03.689Z: Cleaning up.
    Feb 04, 2022 12:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T12:47:03.778Z: Stopping worker pool...
    Feb 04, 2022 12:49:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T12:49:32.993Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 04, 2022 12:49:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T12:49:33.022Z: Worker pool stopped.
    Feb 04, 2022 12:49:39 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-04_04_44_53-6285543913487722693 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 5fd802c7-96f6-49e9-ab13-e8064d16e8b0 and timestamp: 2022-02-04T12:49:39.132000000Z:
                     Metric:                    Value:
                 fields_read                 4890792.0
                   read_time                    10.524

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 04, 2022 12:49:39 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 12 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.006 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.004 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker Thread 3,5,main]) completed. Took 5 mins 5.049 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 17s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/x2fplf4tvzas4

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2995

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2995/display/redirect?page=changes>

Changes:

[Heejong Lee] [BEAM-13813] Add support for URL artifact to extractStagingToPath

[avilovpavel6] Remove Python SQL Test example from catalog

[noreply] Update sdks/go/pkg/beam/artifact/materialize_test.go

[noreply] Merge pull request #16605 from [BEAM-13634][Playground] Create a

[noreply] Merge pull request #16593 from [BEAM-13725][Playground] Add graph to the

[noreply] Merge pull request #16699 from [BEAM-13789][Playground] Change logic of


------------------------------------------
[...truncated 357.37 KB...]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 04, 2022 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 04, 2022 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 04, 2022 6:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 04, 2022 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 04, 2022 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 04, 2022 6:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 04, 2022 6:45:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1313602972]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)
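
The exception text spells out the fix: a coder cannot be inferred for a PCollection of Beam Rows, so the producing transform's output needs an explicit schema via PCollection.setRowSchema. A minimal, self-contained sketch of that pattern (the schema and pass-through DoFn are illustrative stand-ins mirroring the projected columns, not the IT's actual RowMonitor code):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.create());
        Schema schema =
            Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();
        Row row = Row.withSchema(schema).addValues("alice", "story", "hello", 3L).build();
        PCollection<Row> rows =
            p.apply(Create.of(row).withRowSchema(schema))
                .apply(ParDo.of(
                    new DoFn<Row, Row>() {
                      @ProcessElement
                      public void process(@Element Row r, OutputReceiver<Row> out) {
                        out.output(r);
                      }
                    }))
                // A ParDo emitting Row cannot infer RowCoder; setting the schema
                // explicitly is the remedy the error message suggests.
                .setRowSchema(schema);
        p.run().waitUntilFinish();
      }
    }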

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 04, 2022 6:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 04, 2022 6:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 04, 2022 6:45:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 04, 2022 6:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 04, 2022 6:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 04, 2022 6:45:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 04, 2022 6:45:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 04, 2022 6:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
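
These planner logs show the push-down working as intended: where the DEFAULT-method run above planned a BeamCalcRel over a full table scan, the DIRECT_READ run collapses projection and filter into a single BeamPushDownIOSourceRel and hands the filter to the Storage Read API. A sketch of issuing the same query through Beam SQL with a DDL-registered BigQuery table, where the "method" table property selects DIRECT_READ (the reduced column list and the public-dataset LOCATION are illustrative, not this test's actual table):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushDownQuerySketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        // DDL registering a BigQuery-backed table; DIRECT_READ enables
        // projection/filter push-down into the Storage Read API.
        String ddl =
            "CREATE EXTERNAL TABLE HACKER_NEWS (title VARCHAR, `by` VARCHAR, "
                + "score BIGINT, `type` VARCHAR) "
                + "TYPE bigquery "
                + "LOCATION 'bigquery-public-data:hacker_news.full' "
                + "TBLPROPERTIES '{\"method\": \"DIRECT_READ\"}'";
        PCollection<Row> filtered =
            p.apply(
                SqlTransform.query(
                        "SELECT `by` AS author, `type`, title, score FROM HACKER_NEWS "
                            + "WHERE (`type` = 'story' OR `type` = 'job') AND score > 2")
                    .withDdlString(ddl));
        p.run().waitUntilFinish();
      }
    }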
    Feb 04, 2022 6:45:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 04, 2022 6:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 04, 2022 6:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-H6a6pO_uYSIX9GgLf0Gh2wY5tC_UJVqnErEFXS5_jtE.jar
    Feb 04, 2022 6:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5137916894646915566.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-B8WrPZnHUJB0GWjETiBYL6z-Oe7-uhTnk0wJn-7-DVg.jar
    Feb 04, 2022 6:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.dataformat/jackson-dataformat-xml/2.13.0/396634365e8439b27794fa94b571fdc03b4cf7be/jackson-dataformat-xml-2.13.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jackson-dataformat-xml-2.13.0-Yyu4WPqE_RYmgacP7BZkHcWrY2pzow2igQ0noIVmu7o.jar
    Feb 04, 2022 6:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.immutables/value/2.8.8/d99fa1e04af5a1fda42fa9412d68eb7fe17a1071/value-2.8.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/value-2.8.8-k_DQ4kEt4FHC2WzIqOzB87qR2WH_Mw81deYbQ6-miFA.jar
    Feb 04, 2022 6:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.fasterxml.woodstox/woodstox-core/6.2.6/d4a055af5dcadea8d1bded5b64bc7be5bb7fea95/woodstox-core-6.2.6.jar to gs://temp-storage-for-perf-tests/loadtests/staging/woodstox-core-6.2.6-7RMZid5VnxZ00HVSgs8J3XgTkSFBRgr4IvXvHJtaCuE.jar
    Feb 04, 2022 6:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.inject.extensions/guice-servlet/3.0/610cde0e8da5a8b7d8efb8f0b8987466ffebaaf9/guice-servlet-3.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/guice-servlet-3.0-nnKkuFgoiNU8L0KX6TJ2o8FMgogBJEkPLaexap3xxhg.jar
    Feb 04, 2022 6:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mortbay.jetty/servlet-api/2.5-20081211/22bff70037e1e6fa7e6413149489552ee2064702/servlet-api-2.5-20081211.jar to gs://temp-storage-for-perf-tests/loadtests/staging/servlet-api-2.5-20081211-BodWCWmW_gD2BKw7ZnLW9mPcd36kqDBW4kDQRW535HI.jar
    Feb 04, 2022 6:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 356 files cached, 6 files newly uploaded in 0 seconds
    Feb 04, 2022 6:45:34 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 04, 2022 6:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142352 bytes, hash 2fc12a1bfbc3a4e298ef41c1c0b6e947e448448a0b6ab15bcd5f35240781a103> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-L8EqG_vDpOKY70HBwLbpR-RIRIoLarFbzV81JAeBoQM.pb
    Feb 04, 2022 6:45:37 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 04, 2022 6:45:37 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 04, 2022 6:45:37 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 04, 2022 6:45:37 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Feb 04, 2022 6:45:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-03_22_45_37-11281972867560371928?project=apache-beam-testing
    Feb 04, 2022 6:45:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-03_22_45_37-11281972867560371928
    Feb 04, 2022 6:45:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-03_22_45_37-11281972867560371928
    Feb 04, 2022 6:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-04T06:45:48.168Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 04, 2022 6:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T06:45:57.141Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 04, 2022 6:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T06:45:57.918Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 04, 2022 6:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T06:45:57.957Z: Expanding GroupByKey operations into optimizable parts.
    Feb 04, 2022 6:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T06:45:57.988Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 04, 2022 6:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T06:45:58.056Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 04, 2022 6:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T06:45:58.084Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 04, 2022 6:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T06:45:58.126Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 04, 2022 6:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T06:45:58.451Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 04, 2022 6:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T06:45:58.523Z: Starting 5 workers in us-central1-a...
    Feb 04, 2022 6:46:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T06:46:16.264Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
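
This message concerns the Cloud Monitoring quota on custom metric descriptors per project, not the job itself; the linked APIs are projects.metricDescriptors.list and .delete. A hedged sketch of the cleanup it suggests, using the google-cloud-monitoring client (as written this would delete every custom descriptor in the project, so a real cleanup should first narrow the filter to descriptors known to be unused):

    import com.google.api.MetricDescriptor;
    import com.google.cloud.monitoring.v3.MetricServiceClient;
    import com.google.monitoring.v3.ListMetricDescriptorsRequest;
    import com.google.monitoring.v3.ProjectName;

    public class MetricDescriptorCleanupSketch {
      public static void main(String[] args) throws Exception {
        try (MetricServiceClient client = MetricServiceClient.create()) {
          ListMetricDescriptorsRequest request =
              ListMetricDescriptorsRequest.newBuilder()
                  .setName(ProjectName.of("apache-beam-testing").toString())
                  .setFilter("metric.type = starts_with(\"custom.googleapis.com/\")")
                  .build();
          for (MetricDescriptor d : client.listMetricDescriptors(request).iterateAll()) {
            // Destructive: deleting a descriptor frees quota so new
            // user metrics can be created.
            client.deleteMetricDescriptor(d.getName());
          }
        }
      }
    }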
    Feb 04, 2022 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T06:46:39.470Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 04, 2022 6:47:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T06:47:03.895Z: Workers have started successfully.
    Feb 04, 2022 6:47:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T06:47:03.925Z: Workers have started successfully.
    Feb 04, 2022 6:47:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-04T06:47:34.609Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEtlS2J6dl9OTm5KLRoCamQaAmly/streams/CAYaAmpkGgJpciCL8cfyBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEtlS2J6dl9OTm5KLRoCamQaAmly/streams/CAYaAmpkGgJpciCL8cfyBygC': offset 91129 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEtlS2J6dl9OTm5KLRoCamQaAmly/streams/CAYaAmpkGgJpciCL8cfyBygC': offset 91129 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 04, 2022 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T06:47:36.531Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 04, 2022 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T06:47:36.767Z: Cleaning up.
    Feb 04, 2022 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T06:47:36.849Z: Stopping worker pool...
    Feb 04, 2022 6:50:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T06:50:05.525Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 04, 2022 6:50:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T06:50:05.568Z: Worker pool stopped.
    Feb 04, 2022 6:50:16 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-03_22_45_37-11281972867560371928 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 041ef259-3310-4097-baaa-a99235a0096f and timestamp: 2022-02-04T06:50:16.603000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.348

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 04, 2022 6:50:16 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 5 mins 1.173 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 56s
165 actionable tasks: 104 executed, 59 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/h6v5dksmnolkc

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2994

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2994/display/redirect?page=changes>

Changes:

[Kiley Sok] Allow Java 17 to be used in SDK

[Kiley Sok] add testing support

[Kiley Sok] Add more testing support for java 17

[Kiley Sok] workaround for jamm

[Kiley Sok] Add Jenkins test for Java 17

[Kiley Sok] Fix jvm hex and skip errorprone

[Kiley Sok] Fix display data for anonymous classes

[Kiley Sok] fix jpms tests

[Kiley Sok] skip zetasql

[Kiley Sok] spotless

[Kiley Sok] spotless

[Kiley Sok] Fix trigger

[Kiley Sok] skip checker framework

[Kiley Sok] fix app name

[Kiley Sok] remove duplicate property check

[relax] Fix timer consistency in direct runner

[noreply] [BEAM-13757] adds pane observation in DoFn (#16629)

[Jan Lukavský] Change links to Books from Amazon to Publisher

[noreply] [BEAM-13605] Add support for pandas 1.4.0 (#16590)

[noreply] [BEAM-13761] adds Debezium IO wrapper for Go SDK (#16642)

[noreply] [BEAM-13024] Unify PipelineOptions behavior (#16719)


------------------------------------------
[...truncated 364.59 KB...]
    Feb 04, 2022 1:03:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 04, 2022 1:03:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 04, 2022 1:03:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 04, 2022 1:03:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 04, 2022 1:03:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 04, 2022 1:03:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 04, 2022 1:03:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 04, 2022 1:03:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 04, 2022 1:03:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 04, 2022 1:03:22 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 04, 2022 1:03:23 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-H6a6pO_uYSIX9GgLf0Gh2wY5tC_UJVqnErEFXS5_jtE.jar
    Feb 04, 2022 1:03:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6037902984711173707.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-CXpKCzGRN4tEAhpFsid4kjD30HpIfw_IXFwp0h1ThmY.jar
    Feb 04, 2022 1:03:23 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Feb 04, 2022 1:03:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 04, 2022 1:03:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142352 bytes, hash 543fd7d62f96fd81e4aef5cdcc6f1be8fd06a19fffb59dd59dddf517fd22f4ac> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-VD_X1i-W_YHkrvXNzG8b6P0GoZ__tZ3Vnd31F_0i9Kw.pb
    Feb 04, 2022 1:03:26 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 04, 2022 1:03:27 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 04, 2022 1:03:27 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 04, 2022 1:03:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Feb 04, 2022 1:03:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-03_17_03_27-3668599170253383017?project=apache-beam-testing
    Feb 04, 2022 1:03:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-03_17_03_27-3668599170253383017
    Feb 04, 2022 1:03:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-03_17_03_27-3668599170253383017
    Feb 04, 2022 1:03:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-04T01:03:30.718Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 04, 2022 1:03:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T01:03:41.201Z: Worker configuration: e2-standard-2 in us-central1-c.
    Feb 04, 2022 1:03:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T01:03:42.089Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 04, 2022 1:03:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T01:03:42.127Z: Expanding GroupByKey operations into optimizable parts.
    Feb 04, 2022 1:03:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T01:03:42.154Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 04, 2022 1:03:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T01:03:42.221Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 04, 2022 1:03:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T01:03:42.254Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 04, 2022 1:03:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T01:03:42.288Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 04, 2022 1:03:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T01:03:42.726Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 04, 2022 1:03:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T01:03:42.831Z: Starting 5 workers in us-central1-c...
    Feb 04, 2022 1:03:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T01:03:58.207Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 04, 2022 1:04:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T01:04:22.376Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 04, 2022 1:04:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T01:04:49.267Z: Workers have started successfully.
    Feb 04, 2022 1:04:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T01:04:49.291Z: Workers have started successfully.
    Feb 04, 2022 1:05:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-04T01:05:24.803Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHN6TzFBTmZQUjlucBoCamQaAmly/streams/CAEaAmpkGgJpciDt44LTBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHN6TzFBTmZQUjlucBoCamQaAmly/streams/CAEaAmpkGgJpciDt44LTBygC': offset 91065 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHN6TzFBTmZQUjlucBoCamQaAmly/streams/CAEaAmpkGgJpciDt44LTBygC': offset 91065 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 04, 2022 1:05:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-04T01:05:24.912Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHN6TzFBTmZQUjlucBoCamQaAmly/streams/CAUaAmpkGgJpciChrcGKBCgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHN6TzFBTmZQUjlucBoCamQaAmly/streams/CAUaAmpkGgJpciChrcGKBCgC': offset 65368 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHN6TzFBTmZQUjlucBoCamQaAmly/streams/CAUaAmpkGgJpciChrcGKBCgC': offset 65368 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 04, 2022 1:05:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T01:05:29.681Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 04, 2022 1:05:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T01:05:29.861Z: Cleaning up.
    Feb 04, 2022 1:05:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T01:05:29.936Z: Stopping worker pool...
    Feb 04, 2022 1:07:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T01:07:50.357Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 04, 2022 1:07:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-04T01:07:50.415Z: Worker pool stopped.
    Feb 04, 2022 1:07:57 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-03_17_03_27-3668599170253383017 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 83b80625-237c-456d-918b-f6d09a59ab3b and timestamp: 2022-02-04T01:07:57.838000000Z:
                     Metric:                    Value:
                   read_time                    10.873
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 04, 2022 1:07:57 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.035 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.04 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 51.17 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 45s
165 actionable tasks: 107 executed, 56 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/cwsaigzzwgwci

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2993

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2993/display/redirect>

Changes:


------------------------------------------
[...truncated 363.01 KB...]
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-03_10_45_11-13409667959743723637
    Feb 03, 2022 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-03T18:45:14.423Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 03, 2022 6:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T18:45:23.009Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 03, 2022 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T18:45:23.595Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 03, 2022 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T18:45:23.628Z: Expanding GroupByKey operations into optimizable parts.
    Feb 03, 2022 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T18:45:23.655Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 03, 2022 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T18:45:23.745Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 03, 2022 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T18:45:23.788Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 03, 2022 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T18:45:23.833Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 03, 2022 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T18:45:24.185Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 03, 2022 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T18:45:24.251Z: Starting 5 workers in us-central1-f...
    Feb 03, 2022 6:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T18:45:34.345Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 03, 2022 6:46:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T18:46:09.980Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 03, 2022 6:46:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T18:46:34.306Z: Workers have started successfully.
    Feb 03, 2022 6:46:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T18:46:34.343Z: Workers have started successfully.
    Feb 03, 2022 6:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-03T18:47:05.233Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHA3S2JabXVJYUhmVhoCamQaAmly/streams/CAUaAmpkGgJpciCM_qjlBSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHA3S2JabXVJYUhmVhoCamQaAmly/streams/CAUaAmpkGgJpciCM_qjlBSgC': offset 95580 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHA3S2JabXVJYUhmVhoCamQaAmly/streams/CAUaAmpkGgJpciCM_qjlBSgC': offset 95580 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 03, 2022 6:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-03T18:47:06.136Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHA3S2JabXVJYUhmVhoCamQaAmly/streams/CAMaAmpkGgJpciDVqND4AygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHA3S2JabXVJYUhmVhoCamQaAmly/streams/CAMaAmpkGgJpciDVqND4AygC': offset 70656 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHA3S2JabXVJYUhmVhoCamQaAmly/streams/CAMaAmpkGgJpciDVqND4AygC': offset 70656 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 03, 2022 6:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-03T18:47:06.226Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHA3S2JabXVJYUhmVhoCamQaAmly/streams/CAYaAmpkGgJpciCXucmEBCgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHA3S2JabXVJYUhmVhoCamQaAmly/streams/CAYaAmpkGgJpciCXucmEBCgC': offset 101858 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHA3S2JabXVJYUhmVhoCamQaAmly/streams/CAYaAmpkGgJpciCXucmEBCgC': offset 101858 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
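
The three SEVERE entries above are the same transient condition hit on three different read streams: the worker tried to continue a BigQuery Storage Read API stream at an offset the backend had not produced yet ("offset N has not been allocated yet"), so the call failed with FAILED_PRECONDITION and the bounded reader could not advance. Below is a minimal standalone sketch of resuming a stream read from a tracked offset against the v1 Storage Read API; the stream name is copied from the log, and the sleep-and-retry loop is an illustrative assumption, not what the Dataflow worker actually does.

    import com.google.api.gax.rpc.FailedPreconditionException;
    import com.google.api.gax.rpc.ServerStream;
    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    /** Hypothetical reader: resume a Storage API stream from the last good offset. */
    public class ResumableStreamRead {
      public static void main(String[] args) throws Exception {
        // Stream name copied from the log; a real run would obtain it from
        // CreateReadSession rather than hard-coding it.
        String stream = "projects/apache-beam-testing/locations/us/sessions/"
            + "CAISDHA3S2JabXVJYUhmVhoCamQaAmly/streams/CAUaAmpkGgJpciCM_qjlBSgC";
        try (BigQueryReadClient client = BigQueryReadClient.create()) {
          long offset = 0;          // rows consumed so far on this stream
          boolean finished = false;
          while (!finished) {
            ReadRowsRequest request = ReadRowsRequest.newBuilder()
                .setReadStream(stream)
                .setOffset(offset)  // resume exactly where the last attempt stopped
                .build();
            try {
              ServerStream<ReadRowsResponse> responses =
                  client.readRowsCallable().call(request);
              for (ReadRowsResponse response : responses) {
                // ... decode response.getAvroRows() here ...
                offset += response.getRowCount(); // advance only after consuming
              }
              finished = true;      // server closed the stream: all rows delivered
            } catch (FailedPreconditionException e) {
              // "offset N has not been allocated yet": back off, then reconnect
              // from the last committed offset instead of failing the work item.
              Thread.sleep(1000L);
            }
          }
        }
      }
    }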

    Feb 03, 2022 6:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T18:47:08.227Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 03, 2022 6:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T18:47:08.368Z: Cleaning up.
    Feb 03, 2022 6:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T18:47:08.449Z: Stopping worker pool...
    Feb 03, 2022 6:49:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T18:49:35.868Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 03, 2022 6:49:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T18:49:35.922Z: Worker pool stopped.
    Feb 03, 2022 6:49:44 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-03_10_45_11-13409667959743723637 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): dc5f58f2-9629-442b-8644-9612b3bed5da and timestamp: 2022-02-03T18:49:44.082000000Z:
                     Metric:                    Value:
                   read_time                     9.848
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 03, 2022 6:49:44 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 55.024 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 24s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/tvj3aeguuhzvm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2992

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2992/display/redirect>

Changes:


------------------------------------------
[...truncated 387.12 KB...]
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 03, 2022 3:34:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 03, 2022 3:34:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 03, 2022 3:34:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 03, 2022 3:34:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 03, 2022 3:34:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 03, 2022 3:34:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 03, 2022 3:34:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1216741689]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)
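
The exception message enumerates its own remedies: a PCollection of Beam Rows gets no coder from the CoderRegistry, so the producing step has to attach either an explicit coder via .setCoder() or, for Rows, a schema via PCollection.setRowSchema. A minimal sketch of the setRowSchema fix follows, with a hypothetical four-field schema matching the columns the query selects (the real HACKER_NEWS table has more fields):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFix {
      // Attaching a schema lets Beam pick RowCoder for the Row PCollection,
      // which resolves "Cannot provide a coder for a Beam Row".
      static PCollection<Row> withSchema(PCollection<Row> rows) {
        Schema schema = Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();
        return rows.setRowSchema(schema);
      }
    }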

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 03, 2022 3:34:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 03, 2022 3:34:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 03, 2022 3:34:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 03, 2022 3:34:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 03, 2022 3:34:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 03, 2022 3:34:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 03, 2022 3:34:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 03, 2022 3:34:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
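
This is the push-down path working as intended: instead of a BeamCalcRel filtering rows inside the pipeline (as in the readUsingDefaultMethod plan above), the planner emits a BeamPushDownIOSourceRel and hands both the projection (usedFields) and the supported predicate to the BigQuery Storage Read API. A sketch of how the same query could be issued through the public SqlTransform API is below; the DDL string, LOCATION format, and the "method" table property are illustrative assumptions, while the actual test registers its table programmatically.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushDownQuery {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Illustrative DDL: TYPE bigquery plus method=DIRECT_READ is what makes
        // the planner eligible for Storage Read API projection/filter push-down.
        String ddl =
            "CREATE EXTERNAL TABLE HACKER_NEWS (\n"
                + "  title VARCHAR, score BIGINT, `by` VARCHAR, type VARCHAR)\n"
                + "TYPE bigquery\n"
                + "LOCATION 'apache-beam-testing:beam.HACKER_NEWS'\n"
                + "TBLPROPERTIES '{\"method\": \"DIRECT_READ\"}'";

        PCollection<Row> rows =
            p.apply(
                SqlTransform.query(
                        "SELECT `by` AS author, type, title, score FROM HACKER_NEWS "
                            + "WHERE (type = 'story' OR type = 'job') AND score > 2")
                    .withDdlString(ddl));

        p.run().waitUntilFinish();
      }
    }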
    Feb 03, 2022 3:34:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 03, 2022 3:34:16 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 03, 2022 3:34:16 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-pnoPFzsrg0uK36iFTKbiIklaVbWhTEnaObbCgM0jHcI.jar
    Feb 03, 2022 3:34:17 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.37.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.37.0-SNAPSHOT-vPEGqxI9MKGLtuCuQz0HzI-33fdG7YkDTKUwhwE408c.jar
    Feb 03, 2022 3:34:17 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2135603203249865324.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-RFqKXeoPB4RnTeKFmY3hNAZSjRLktwtCcW5muqqEyVE.jar
    Feb 03, 2022 3:34:18 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/join-library/build/libs/beam-sdks-java-extensions-join-library-2.37.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-join-library-2.37.0-SNAPSHOT-xz-aSlhizJi8jigGo3amZ2g_MkglE3dBulicsi2UX5U.jar
    Feb 03, 2022 3:34:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/udf/build/libs/beam-sdks-java-extensions-sql-udf-2.37.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-udf-2.37.0-SNAPSHOT-MlQQAS1Je1cH4lXtIOKM3zL6o97S5F7sJQ3zUailw08.jar
    Feb 03, 2022 3:34:23 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 358 files cached, 4 files newly uploaded in 7 seconds
    Feb 03, 2022 3:34:24 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 03, 2022 3:34:24 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142351 bytes, hash 2d0f41428aec96ce3d977835911a77340f8f40594a0d7a8152262fa338be95d9> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-LQ9BQorsls49l3g1kRp3NA-PQFlKDXqBUiYvozi-ldk.pb
    Feb 03, 2022 3:34:33 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 03, 2022 3:34:33 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 03, 2022 3:34:33 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 03, 2022 3:34:34 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Feb 03, 2022 3:34:35 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-03_07_34_34-183624775264119218?project=apache-beam-testing
    Feb 03, 2022 3:34:35 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-03_07_34_34-183624775264119218
    Feb 03, 2022 3:34:35 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-03_07_34_34-183624775264119218
    Feb 03, 2022 3:34:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-03T15:34:37.957Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 03, 2022 3:34:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T15:34:48.781Z: Worker configuration: e2-standard-2 in us-central1-b.
    Feb 03, 2022 3:34:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T15:34:49.425Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 03, 2022 3:34:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T15:34:49.473Z: Expanding GroupByKey operations into optimizable parts.
    Feb 03, 2022 3:34:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T15:34:49.502Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 03, 2022 3:34:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T15:34:49.570Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 03, 2022 3:34:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T15:34:49.596Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 03, 2022 3:34:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T15:34:49.630Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 03, 2022 3:34:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T15:34:49.965Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 03, 2022 3:34:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T15:34:50.053Z: Starting 5 workers in us-central1-b...
    Feb 03, 2022 3:35:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T15:35:08.990Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 03, 2022 3:35:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T15:35:35.508Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 03, 2022 3:36:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T15:36:00.999Z: Workers have started successfully.
    Feb 03, 2022 3:36:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T15:36:01.048Z: Workers have started successfully.
    Feb 03, 2022 3:36:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-03T15:36:33.484Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDG1HNU13R19xREl1SRoCamQaAmly/streams/CAQaAmpkGgJpciDUvLSbBSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG1HNU13R19xREl1SRoCamQaAmly/streams/CAQaAmpkGgJpciDUvLSbBSgC': offset 64654 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG1HNU13R19xREl1SRoCamQaAmly/streams/CAQaAmpkGgJpciDUvLSbBSgC': offset 64654 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 03, 2022 3:36:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T15:36:36.669Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 03, 2022 3:36:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T15:36:36.826Z: Cleaning up.
    Feb 03, 2022 3:36:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T15:36:36.895Z: Stopping worker pool...
    Feb 03, 2022 3:38:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T15:38:56.567Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 03, 2022 3:38:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T15:38:56.602Z: Worker pool stopped.
    Feb 03, 2022 3:39:03 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-03_07_34_34-183624775264119218 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8f23c188-6ba4-4027-ac1a-b72bfb51cd3b and timestamp: 2022-02-03T15:39:03.444000000Z:
                     Metric:                    Value:
                   read_time                    11.077
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 03, 2022 3:39:03 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.087 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.135 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 5 mins 32.816 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 9m 18s
165 actionable tasks: 104 executed, 59 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/ldccxynq75im2

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2991

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2991/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13605] Modify groupby.apply implementation in preparation for

[noreply] Merge pull request #16436 from [BEAM-1330] - DatastoreIO Writes should


------------------------------------------
[...truncated 360.66 KB...]
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 03, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 03, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 03, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 03, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 03, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 03, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 03, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@434699861]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 03, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 03, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 03, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 03, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 03, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 03, 2022 6:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 03, 2022 6:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 03, 2022 6:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 03, 2022 6:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 03, 2022 6:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 03, 2022 6:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-pnoPFzsrg0uK36iFTKbiIklaVbWhTEnaObbCgM0jHcI.jar
    Feb 03, 2022 6:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5493892823438737526.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-CboE1hPAVLzKGM0RmevBLWRSwv0SF33xu-STAiUbYyI.jar
    Feb 03, 2022 6:45:31 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Feb 03, 2022 6:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 03, 2022 6:45:31 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142352 bytes, hash 6de45d1dae57b047dce48be79cf709345e10c2fd339373246efc85a86c71a03d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-beRdHa5XsEfc5IvnnPcJNF4Qwv0zk3MkbvyFqGxxoD0.pb
    Feb 03, 2022 6:45:34 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 03, 2022 6:45:34 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 03, 2022 6:45:34 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 03, 2022 6:45:34 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Feb 03, 2022 6:45:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-02_22_45_34-12536326762998578585?project=apache-beam-testing
    Feb 03, 2022 6:45:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-02_22_45_34-12536326762998578585
    Feb 03, 2022 6:45:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-02_22_45_34-12536326762998578585
    Feb 03, 2022 6:45:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-03T06:45:38.370Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 03, 2022 6:45:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T06:45:47.015Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 03, 2022 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T06:45:47.889Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 03, 2022 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T06:45:47.932Z: Expanding GroupByKey operations into optimizable parts.
    Feb 03, 2022 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T06:45:47.960Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 03, 2022 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T06:45:48.023Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 03, 2022 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T06:45:48.077Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 03, 2022 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T06:45:48.100Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 03, 2022 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T06:45:48.614Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 03, 2022 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T06:45:48.689Z: Starting 5 workers in us-central1-a...
    Feb 03, 2022 6:45:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T06:45:53.889Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 03, 2022 6:46:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T06:46:40.160Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 03, 2022 6:47:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T06:47:03.532Z: Workers have started successfully.
    Feb 03, 2022 6:47:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T06:47:03.558Z: Workers have started successfully.
    Feb 03, 2022 6:47:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-03T06:47:34.625Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDE5tbW02am9NNTNUORoCamQaAmly/streams/CAgaAmpkGgJpciDg5e6eASgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDE5tbW02am9NNTNUORoCamQaAmly/streams/CAgaAmpkGgJpciDg5e6eASgC': offset 91398 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDE5tbW02am9NNTNUORoCamQaAmly/streams/CAgaAmpkGgJpciDg5e6eASgC': offset 91398 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

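The FAILED_PRECONDITION above ("offset 91398 has not been allocated yet")
means the reader requested a row offset on a Storage Read API stream beyond
what the server had made available; the same error recurs on other streams
and sessions later in this log. As a minimal sketch (not the Beam worker's
actual recovery logic), a client can resume a read at the last offset it
fully consumed using the google-cloud-bigquerystorage v1 API; the stream
name, starting offset, and sleep-and-retry policy below are illustrative
assumptions:

    import com.google.api.gax.rpc.FailedPreconditionException;
    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    public class ResumableReadSketch {
      public static void main(String[] args) throws Exception {
        // Placeholder stream name; a real one comes back from CreateReadSession.
        String stream = "projects/my-project/locations/us/sessions/SESSION/streams/STREAM";
        long offset = 0L; // rows fully consumed so far on this stream
        try (BigQueryReadClient client = BigQueryReadClient.create()) {
          boolean done = false;
          while (!done) {
            ReadRowsRequest request =
                ReadRowsRequest.newBuilder().setReadStream(stream).setOffset(offset).build();
            try {
              for (ReadRowsResponse response : client.readRowsCallable().call(request)) {
                // Advance only past rows actually handled, so a retry resumes
                // at the first unread row instead of skipping data.
                offset += response.getRowCount();
              }
              done = true; // server ended the stream normally
            } catch (FailedPreconditionException e) {
              Thread.sleep(1_000); // crude backoff before re-requesting the same offset
            }
          }
        }
      }
    }

Whether a given FAILED_PRECONDITION is worth retrying depends on server
state; here the error repeats across several streams and sessions, which
points at the session or service rather than the client read loop.
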
    Feb 03, 2022 6:47:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T06:47:36.444Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 03, 2022 6:47:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T06:47:36.571Z: Cleaning up.
    Feb 03, 2022 6:47:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T06:47:36.649Z: Stopping worker pool...
    Feb 03, 2022 6:50:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T06:50:15.937Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 03, 2022 6:50:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T06:50:16.015Z: Worker pool stopped.
    Feb 03, 2022 6:50:23 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-02_22_45_34-12536326762998578585 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): cd5f4529-30cd-49ec-9761-395f6c6975a3 and timestamp: 2022-02-03T06:50:23.364000000Z:
                     Metric:                    Value:
                   read_time                     9.632
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 03, 2022 6:50:23 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

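The "Missing property -- measurement/database" warning means the run had no
InfluxDB measurement or database configured, so the metrics above were only
printed, not persisted. For illustration only (this is not the Beam
publisher's code path), a point like the one above could be written to an
InfluxDB 1.x server over its HTTP write endpoint; the host and database
name here are hypothetical:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class InfluxWriteSketch {
      public static void main(String[] args) throws Exception {
        // Hypothetical host and database; real values come from test properties.
        URL url = new URL("http://localhost:8086/write?db=beam_test_metrics");
        // InfluxDB 1.x line protocol: measurement, then field=value pairs.
        String point = "sql_bqio_read_java_batch read_time=9.632,fields_read=4375276";
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
          out.write(point.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode()); // 204 = accepted
      }
    }
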
Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 5 mins 9.409 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 2s
165 actionable tasks: 107 executed, 56 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/u7vmsopaqsqhw

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2990

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2990/display/redirect?page=changes>

Changes:

[aydar.zaynutdinov] [BEAM-13737][Playground]

[Robert Bradshaw] Make num-stages counter into an internal counter.

[Kenneth Knowles] [BEAM-13768] Fix NullPointerException in BigQueryStorageSourceBase

[Robert Bradshaw] Avoid packaging avro in the java harness jar.

[noreply] [BEAM-13430]  Fix provided configuration by removing extendsFrom for

[noreply] [BEAM-12830] Print clearer go version fail message (#16693)

[Jan Lukavský] Add reference to Books to Learning Resources in website

[noreply] Use ThreadLocal for DESERIALIZATION_CONTEXT (#16680)

[noreply] Minor: Add apt update after adding deadsnakes repository in dev env

[noreply] [BEAM-13807] Regenerate container images to get TF 2.8.0 (#16707)

[noreply] [BEAM-13399, BEAM-13683] Eagerly materialize artifacts for automated

[noreply] Merge pull request #16617 from [BEAM-13743] [Playground] Add context

[noreply] Merge pull request #16618 from [BEAM-13744] [Playground] Add context

[noreply] Merge pull request #16698 from [BEAM-13802][Playground] [Bugfix] Clean

[noreply] [BEAM-13293][BEAM-13806] Pipe a SchemaIO flag through Go integration


------------------------------------------
[...truncated 377.72 KB...]
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHViN0h4dDNoYjRyNRoCamQaAmly/streams/CAgaAmpkGgJpciC39sCkASgC': offset 91842 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 03, 2022 1:05:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-03T01:05:33.298Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHViN0h4dDNoYjRyNRoCamQaAmly/streams/CAMaAmpkGgJpciCBiKMRKAI"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHViN0h4dDNoYjRyNRoCamQaAmly/streams/CAMaAmpkGgJpciCBiKMRKAI': offset 72262 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHViN0h4dDNoYjRyNRoCamQaAmly/streams/CAMaAmpkGgJpciCBiKMRKAI': offset 72262 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 03, 2022 1:05:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-03T01:05:33.299Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHViN0h4dDNoYjRyNRoCamQaAmly/streams/CAUaAmpkGgJpciDD1ueuBigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHViN0h4dDNoYjRyNRoCamQaAmly/streams/CAUaAmpkGgJpciDD1ueuBigC': offset 68525 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHViN0h4dDNoYjRyNRoCamQaAmly/streams/CAUaAmpkGgJpciDD1ueuBigC': offset 68525 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 03, 2022 1:05:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-03T01:05:34.089Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHViN0h4dDNoYjRyNRoCamQaAmly/streams/CAkaAmpkGgJpciDp2ODkAygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHViN0h4dDNoYjRyNRoCamQaAmly/streams/CAkaAmpkGgJpciDp2ODkAygC': offset 95197 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHViN0h4dDNoYjRyNRoCamQaAmly/streams/CAkaAmpkGgJpciDp2ODkAygC': offset 95197 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 03, 2022 1:05:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T01:05:38.799Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 03, 2022 1:05:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T01:05:38.963Z: Cleaning up.
    Feb 03, 2022 1:05:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T01:05:39.041Z: Stopping worker pool...
    Feb 03, 2022 1:08:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T01:08:01.882Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 03, 2022 1:08:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-03T01:08:01.956Z: Worker pool stopped.
    Feb 03, 2022 1:08:10 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-02_17_03_42-16932756428406606693 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): eeeed088-bc97-4b86-b8e0-2949721ec0a1 and timestamp: 2022-02-03T01:08:10.218000000Z:
                     Metric:                    Value:
                   read_time                     12.26
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 03, 2022 1:08:10 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 47.423 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 35s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/fyvwpsfcczrfk

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2989

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2989/display/redirect?page=changes>

Changes:

[mmack] [BEAM-13563] Restructure Kinesis Source for Aws 2 internally to prepare

[noreply] [BEAM-4665] Allow joining a running dataflow pipeline without throwing

[noreply] [BEAM-13801] Add standard coder tests for state backed iterable.


------------------------------------------
[...truncated 373.71 KB...]
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEd5WG0xdElnSDBRZBoCamQaAmly/streams/CAMaAmpkGgJpciD-44fIBygC': offset 69952 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 02, 2022 6:48:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-02T18:48:21.951Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEd5WG0xdElnSDBRZBoCamQaAmly/streams/CAkaAmpkGgJpciDLm_CXBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEd5WG0xdElnSDBRZBoCamQaAmly/streams/CAkaAmpkGgJpciDLm_CXBygC': offset 100982 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEd5WG0xdElnSDBRZBoCamQaAmly/streams/CAkaAmpkGgJpciDLm_CXBygC': offset 100982 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 02, 2022 6:48:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-02T18:48:21.965Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEd5WG0xdElnSDBRZBoCamQaAmly/streams/CAQaAmpkGgJpciDB1Y-XAygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEd5WG0xdElnSDBRZBoCamQaAmly/streams/CAQaAmpkGgJpciDB1Y-XAygC': offset 91968 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEd5WG0xdElnSDBRZBoCamQaAmly/streams/CAQaAmpkGgJpciDB1Y-XAygC': offset 91968 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 02, 2022 6:48:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-02T18:48:22.013Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEd5WG0xdElnSDBRZBoCamQaAmly/streams/CAgaAmpkGgJpciD65-S8BygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEd5WG0xdElnSDBRZBoCamQaAmly/streams/CAgaAmpkGgJpciD65-S8BygC': offset 66151 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEd5WG0xdElnSDBRZBoCamQaAmly/streams/CAgaAmpkGgJpciD65-S8BygC': offset 66151 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 02, 2022 6:48:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T18:48:26.100Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 02, 2022 6:48:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T18:48:26.281Z: Cleaning up.
    Feb 02, 2022 6:48:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T18:48:26.360Z: Stopping worker pool...
    Feb 02, 2022 6:50:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T18:50:49.627Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 02, 2022 6:50:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T18:50:49.677Z: Worker pool stopped.
    Feb 02, 2022 6:50:55 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-02_10_46_25-1308613348136127911 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c0e6036c-0e9e-4e16-a876-edf222ca1881 and timestamp: 2022-02-02T18:50:55.176000000Z:
                     Metric:                    Value:
                   read_time                    11.199
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 02, 2022 6:50:55 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
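
This warning means the run had no InfluxDB target configured, so the read_time/fields_read metrics above stay in the log only. If publishing were wanted, the test utilities take the database and measurement from pipeline options; the flags below are an assumption based on how other Beam performance jobs are configured, not taken from this job's definition:

    --influxDatabase=beam_test_metrics --influxMeasurement=sql_bqio_read_java_batch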

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 50.695 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings
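
To surface the individual deprecation warnings locally, the same task can be rerun with the warning mode the message suggests (invocation assembled from the task path logged above, assuming a standard checkout):

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --warning-mode all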

BUILD FAILED in 6m 34s
165 actionable tasks: 105 executed, 58 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/tlwr23bva4wmu

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2988

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2988/display/redirect>

Changes:


------------------------------------------
[...truncated 348.65 KB...]
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 02, 2022 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 02, 2022 12:45:09 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 02, 2022 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 02, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 02, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 02, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 02, 2022 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 02, 2022 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 02, 2022 12:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 02, 2022 12:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@741767653]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
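
This coder failure is what sinks two of the three tests in each of these builds (readUsingDirectReadMethod and readUsingDefaultMethod fail the same way, while the push-down variant runs to completion). The three "root causes" in the message mirror Beam's coder-resolution order: an explicit setCoder(), inference from the CoderRegistry, and the producing transform's default output coder. For Row elements the message itself names the fix: attach a schema so a RowCoder can be inferred. A hedged sketch of that fix (the field names mirror the projected columns in the SQL above; the actual test wiring is assumed):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    Schema schema =
        Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();
    // Attaching the schema lets Beam infer RowCoder.of(schema) for the output.
    rows.setRowSchema(schema);  // rows: PCollection<Row>, assumed to exist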

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@190838539]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 02, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 02, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 02, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
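
Note the plan difference between this test and the two failing ones: here the projection (usedFields) and the filter are folded into a single BeamPushDownIOSourceRel and shipped to BigQuery, whereas the non-push-down runs keep a BeamCalcRel on top of the IO source. A minimal sketch of running the same query through Beam SQL (push-down itself requires the BigQuery table provider the IT configures; here a pre-read PCollection of Rows stands in, so the sketch shows only the query mechanics):

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TupleTag;

    // hackerNewsRows: PCollection<Row> with a schema, assumed to exist.
    PCollection<Row> result =
        PCollectionTuple.of(new TupleTag<>("HACKER_NEWS"), hackerNewsRows)
            .apply(
                SqlTransform.query(
                    "SELECT `by` AS author, type, title, score "
                        + "FROM HACKER_NEWS "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2"));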
    Feb 02, 2022 12:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 02, 2022 12:45:20 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 02, 2022 12:45:20 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-cucPaeHkP12S23urh2ulDTin3hnEhAHMKvLejSrb1QQ.jar
    Feb 02, 2022 12:45:20 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.37.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.37.0-SNAPSHOT-tests-WW_qh_qZ0OtxNK1HIZTckB7QnP0NV0saj8TCBpbuXbM.jar
    Feb 02, 2022 12:45:20 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.37.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.37.0-SNAPSHOT-tests-9waRkOC4eM0X7ABYrPq1tRErTw5OgxB_aYDH-VkPcDE.jar
    Feb 02, 2022 12:45:20 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7100366168725260592.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-9mXpBLswDPjYX8qQehVAdytkIozgjrul0NK1HTm64UY.jar
    Feb 02, 2022 12:45:24 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 359 files cached, 3 files newly uploaded in 4 seconds
    Feb 02, 2022 12:45:25 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 02, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142351 bytes, hash e62da3c0d839350dc04bac4c0809bc6da7ce4d308cf885ac2ae8d366c5e7a03f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-5i2jwNg5NQ3AS6xMCAm8bafOTTCM-IWsKujTZsXnoD8.pb
    Feb 02, 2022 12:45:28 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 02, 2022 12:45:28 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 02, 2022 12:45:28 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 02, 2022 12:45:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Feb 02, 2022 12:45:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-02_04_45_28-2707412612198165738?project=apache-beam-testing
    Feb 02, 2022 12:45:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-02_04_45_28-2707412612198165738
    Feb 02, 2022 12:45:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-02_04_45_28-2707412612198165738
    Feb 02, 2022 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-02T12:45:31.933Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 02, 2022 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T12:45:39.730Z: Worker configuration: e2-standard-2 in us-central1-c.
    Feb 02, 2022 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T12:45:40.497Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 02, 2022 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T12:45:40.543Z: Expanding GroupByKey operations into optimizable parts.
    Feb 02, 2022 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T12:45:40.575Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 02, 2022 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T12:45:40.655Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 02, 2022 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T12:45:40.696Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 02, 2022 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T12:45:40.724Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 02, 2022 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T12:45:41.122Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 02, 2022 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T12:45:41.194Z: Starting 5 workers in us-central1-c...
    Feb 02, 2022 12:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T12:45:47.805Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 02, 2022 12:46:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T12:46:21.801Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 02, 2022 12:46:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T12:46:46.347Z: Workers have started successfully.
    Feb 02, 2022 12:46:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T12:46:46.379Z: Workers have started successfully.
    Feb 02, 2022 12:47:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T12:47:16.974Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 02, 2022 12:47:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T12:47:17.155Z: Cleaning up.
    Feb 02, 2022 12:47:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T12:47:17.251Z: Stopping worker pool...
    Feb 02, 2022 12:49:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T12:49:41.352Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 02, 2022 12:49:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T12:49:41.399Z: Worker pool stopped.
    Feb 02, 2022 12:49:47 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-02_04_45_28-2707412612198165738 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c2d371df-7e6d-4c54-bd53-b93de1dd62f6 and timestamp: 2022-02-02T12:49:47.478000000Z:
                     Metric:                    Value:
                   read_time                      7.48
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 02, 2022 12:49:47 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 42.443 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 28s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/5uuybcdcrhmeg

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2987

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2987/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13574] Large Wordcount (#16455)

[noreply] [BEAM-13293] Refactor JDBC IO Go Wrapper (#16686)

[noreply] Edit license script for Java, add manual licenses for xz (#16692)


------------------------------------------
[...truncated 353.65 KB...]
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 02, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 02, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 02, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 02, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 02, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 02, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 02, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@190838539]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 02, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 02, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 02, 2022 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 02, 2022 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 02, 2022 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 02, 2022 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 02, 2022 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 02, 2022 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 02, 2022 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 02, 2022 6:45:11 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 02, 2022 6:45:11 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-cucPaeHkP12S23urh2ulDTin3hnEhAHMKvLejSrb1QQ.jar
    Feb 02, 2022 6:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3494169093724190073.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-MgZUFMUyNTMbUw4GqUDa5hJJfpIor8vRvCgSK7jd88I.jar
    Feb 02, 2022 6:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.beam/beam-vendor-calcite-1_28_0/0.2/b8ec320b972b575ab37767bf8d4cfadff1fe304a/beam-vendor-calcite-1_28_0-0.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/beam-vendor-calcite-1_28_0-0.2-pvjNvR5NntriHz0ja9OKytRXl6tOlifYWKUI0wMGtVo.jar
    Feb 02, 2022 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 360 files cached, 2 files newly uploaded in 2 seconds
    Feb 02, 2022 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 02, 2022 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142351 bytes, hash 47abb89cf05b53d239d8c25a3d67b8e5a9eb7e4873a560e64a93d5d82349796a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-R6u4nPBbU9I52MJaPWe45anrfkhzpWDmSpPV2CNJeWo.pb
    Feb 02, 2022 6:45:17 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 02, 2022 6:45:17 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 02, 2022 6:45:17 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 02, 2022 6:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Feb 02, 2022 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-01_22_45_17-13297275722883028584?project=apache-beam-testing
    Feb 02, 2022 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-01_22_45_17-13297275722883028584
    Feb 02, 2022 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-01_22_45_17-13297275722883028584
    Feb 02, 2022 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-02T06:45:21.248Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 02, 2022 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T06:45:30.913Z: Worker configuration: e2-standard-2 in us-central1-c.
    Feb 02, 2022 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T06:45:32.118Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 02, 2022 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T06:45:32.160Z: Expanding GroupByKey operations into optimizable parts.
    Feb 02, 2022 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T06:45:32.190Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 02, 2022 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T06:45:32.253Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 02, 2022 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T06:45:32.280Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 02, 2022 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T06:45:32.306Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 02, 2022 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T06:45:32.653Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 02, 2022 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T06:45:32.720Z: Starting 5 workers in us-central1-c...
    Feb 02, 2022 6:45:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T06:45:55.106Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 02, 2022 6:46:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T06:46:15.511Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 02, 2022 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T06:46:38.984Z: Workers have started successfully.
    Feb 02, 2022 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T06:46:39.014Z: Workers have started successfully.
    Feb 02, 2022 6:47:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-02T06:47:08.955Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDlSQkotX2Q5ck4zYRoCamQaAmly/streams/CAgaAmpkGgJpciDvq82zAygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDlSQkotX2Q5ck4zYRoCamQaAmly/streams/CAgaAmpkGgJpciDvq82zAygC': offset 118782 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDlSQkotX2Q5ck4zYRoCamQaAmly/streams/CAgaAmpkGgJpciDvq82zAygC': offset 118782 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 02, 2022 6:47:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T06:47:10.704Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 02, 2022 6:47:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T06:47:10.880Z: Cleaning up.
    Feb 02, 2022 6:47:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T06:47:10.969Z: Stopping worker pool...
    Feb 02, 2022 6:49:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T06:49:31.498Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 02, 2022 6:49:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T06:49:31.573Z: Worker pool stopped.
    Feb 02, 2022 6:49:37 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-01_22_45_17-13297275722883028584 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): e5e4f639-3016-4cc2-9db9-f53094b29dd3 and timestamp: 2022-02-02T06:49:37.450000000Z:
                     Metric:                    Value:
                   read_time                     9.174
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 02, 2022 6:49:37 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.174 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.173 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 42.942 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 12s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/527nkhkfojmx4

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2986

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2986/display/redirect?page=changes>

Changes:

[career] [BEAM-13734] Support cache directories that use GCS buckets

[noreply] Merge pull request #16655 from [BEAM-12164]: Add retry protection to

[noreply] Merge pull request #16586 from [BEAM-13731] FhirIO: Add support for

[noreply] [BEAM-13011] Adds a link to Multi-language Pipelines Tips wiki page

[noreply] [BEAM-12572] Run python examples on multiple runners (#16154)


------------------------------------------
[...truncated 369.73 KB...]
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-01_16_46_30-10494215165743532027
    Feb 02, 2022 12:46:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-02T00:46:33.797Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 02, 2022 12:46:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T00:46:43.586Z: Worker configuration: e2-standard-2 in us-central1-f.
    Feb 02, 2022 12:46:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T00:46:44.768Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 02, 2022 12:46:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T00:46:44.803Z: Expanding GroupByKey operations into optimizable parts.
    Feb 02, 2022 12:46:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T00:46:44.848Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 02, 2022 12:46:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T00:46:44.928Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 02, 2022 12:46:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T00:46:44.964Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 02, 2022 12:46:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T00:46:45.000Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 02, 2022 12:46:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T00:46:45.360Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 02, 2022 12:46:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T00:46:45.426Z: Starting 5 workers in us-central1-f...
    Feb 02, 2022 12:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T00:47:05.461Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 02, 2022 12:47:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T00:47:33.621Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 02, 2022 12:48:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T00:47:59.161Z: Workers have started successfully.
    Feb 02, 2022 12:48:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T00:47:59.181Z: Workers have started successfully.
    Feb 02, 2022 12:48:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-02T00:48:29.210Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFlTa0hKWWtIanprcxoCamQaAmly/streams/CAgaAmpkGgJpciCauP6MAygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFlTa0hKWWtIanprcxoCamQaAmly/streams/CAgaAmpkGgJpciCauP6MAygC': offset 80714 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFlTa0hKWWtIanprcxoCamQaAmly/streams/CAgaAmpkGgJpciCauP6MAygC': offset 80714 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
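
    For anyone triaging this from the archive: FAILED_PRECONDITION with "offset N has not been allocated yet" is the Storage Read API rejecting a ReadRows call whose requested offset lies beyond what the server has produced for that stream so far. A standalone sketch of the kind of request involved, using the public BigQuery Storage client rather than the worker's internal code path; the stream name and offset below are placeholders shaped after the trace, not real resources:

    import com.google.api.gax.rpc.ServerStream;
    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    public class ReadRowsAtOffsetSketch {
      public static void main(String[] args) throws Exception {
        try (BigQueryReadClient client = BigQueryReadClient.create()) {
          ReadRowsRequest request =
              ReadRowsRequest.newBuilder()
                  // Placeholder stream name; a real one comes from CreateReadSession.
                  .setReadStream("projects/my-project/locations/us/sessions/SESSION/streams/STREAM")
                  // Resuming at an offset the stream has not allocated yet is what
                  // surfaces as the FAILED_PRECONDITION in the trace above.
                  .setOffset(80714L)
                  .build();
          ServerStream<ReadRowsResponse> responses = client.readRowsCallable().call(request);
          for (ReadRowsResponse response : responses) {
            System.out.println("rows in batch: " + response.getRowCount());
          }
        }
      }
    }

    Note that, further down in this log, the job still finishes with status DONE, so these SEVERE entries were retried successfully at the Dataflow level.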

    Feb 02, 2022 12:48:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-02T00:48:29.602Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFlTa0hKWWtIanprcxoCamQaAmly/streams/CAEaAmpkGgJpciD9tNvCBSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFlTa0hKWWtIanprcxoCamQaAmly/streams/CAEaAmpkGgJpciD9tNvCBSgC': offset 71282 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFlTa0hKWWtIanprcxoCamQaAmly/streams/CAEaAmpkGgJpciD9tNvCBSgC': offset 71282 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 02, 2022 12:48:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-02T00:48:31.609Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFlTa0hKWWtIanprcxoCamQaAmly/streams/CAMaAmpkGgJpciCdutu1BSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFlTa0hKWWtIanprcxoCamQaAmly/streams/CAMaAmpkGgJpciCdutu1BSgC': offset 66439 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFlTa0hKWWtIanprcxoCamQaAmly/streams/CAMaAmpkGgJpciCdutu1BSgC': offset 66439 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 02, 2022 12:48:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T00:48:33.639Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 02, 2022 12:48:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T00:48:33.789Z: Cleaning up.
    Feb 02, 2022 12:48:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T00:48:33.862Z: Stopping worker pool...
    Feb 02, 2022 12:51:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T00:51:05.403Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 02, 2022 12:51:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-02T00:51:05.449Z: Worker pool stopped.
    Feb 02, 2022 12:51:13 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-01_16_46_30-10494215165743532027 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 58d10437-0656-4e03-933e-872a7f5852d7 and timestamp: 2022-02-02T00:51:13.197000000Z:
                     Metric:                    Value:
                   read_time                    10.433
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 02, 2022 12:51:13 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 5 mins 2.822 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 54s
165 actionable tasks: 106 executed, 57 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/6kkiy4v73cfc6

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2985

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2985/display/redirect>

Changes:


------------------------------------------
[...truncated 359.70 KB...]
    Feb 01, 2022 7:36:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 01, 2022 7:36:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 01, 2022 7:36:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 01, 2022 7:36:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 01, 2022 7:36:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 01, 2022 7:36:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 01, 2022 7:36:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 01, 2022 7:36:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
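
    The BEAMPlan above folds the projection (usedFields) and the supported filter into the IO source itself. At the Storage Read API level that corresponds, roughly, to a read session created with selected fields and a row restriction. A sketch of such a request with the public client, assuming hypothetical project and table resource names; this illustrates the shape of the pushed-down read, not Beam's internal implementation:

    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.CreateReadSessionRequest;
    import com.google.cloud.bigquery.storage.v1.DataFormat;
    import com.google.cloud.bigquery.storage.v1.ReadSession;

    public class PushDownReadSessionSketch {
      public static void main(String[] args) throws Exception {
        ReadSession.TableReadOptions options =
            ReadSession.TableReadOptions.newBuilder()
                // The projected columns from usedFields above.
                .addSelectedFields("by")
                .addSelectedFields("type")
                .addSelectedFields("title")
                .addSelectedFields("score")
                // The filter string logged as "Pushing down the following filter".
                .setRowRestriction("(type = 'story' OR type = 'job') AND score > 2")
                .build();
        ReadSession sessionTemplate =
            ReadSession.newBuilder()
                // Hypothetical table resource name.
                .setTable("projects/my-project/datasets/beam/tables/HACKER_NEWS")
                .setDataFormat(DataFormat.AVRO)
                .setReadOptions(options)
                .build();
        try (BigQueryReadClient client = BigQueryReadClient.create()) {
          ReadSession session =
              client.createReadSession(
                  CreateReadSessionRequest.newBuilder()
                      .setParent("projects/my-project")
                      .setReadSession(sessionTemplate)
                      .setMaxStreamCount(5)
                      .build());
          System.out.println("streams allocated: " + session.getStreamsCount());
        }
      }
    }

    The corresponding user-facing knobs on BigQueryIO are withSelectedFields and withRowRestriction, which is what the SQL layer's push-down ultimately drives.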
    Feb 01, 2022 7:36:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 01, 2022 7:36:10 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 01, 2022 7:36:10 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-cucPaeHkP12S23urh2ulDTin3hnEhAHMKvLejSrb1QQ.jar
    Feb 01, 2022 7:36:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test537636949990587421.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-vSBKQUImxYbhBMDNAdNkf5g_SKSK8N-xD-GhRcBAZnY.jar
    Feb 01, 2022 7:36:12 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 1 seconds
    Feb 01, 2022 7:36:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 01, 2022 7:36:12 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142351 bytes, hash 9b3c4dedda1275bad888851fe344901c8b76512ff18e82dbbe1d0e0ac06e5c81> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-mzxN7doSdbrYiIUf40SQHIt2US_xjoLbvh0OCsBuXIE.pb
    Feb 01, 2022 7:36:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 01, 2022 7:36:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 01, 2022 7:36:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 01, 2022 7:36:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Feb 01, 2022 7:36:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-01_11_36_15-2660206615217096671?project=apache-beam-testing
    Feb 01, 2022 7:36:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-01_11_36_15-2660206615217096671
    Feb 01, 2022 7:36:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-01_11_36_15-2660206615217096671
    Feb 01, 2022 7:36:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-01T19:36:19.094Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 01, 2022 7:36:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T19:36:29.865Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 01, 2022 7:36:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T19:36:30.600Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 01, 2022 7:36:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T19:36:30.652Z: Expanding GroupByKey operations into optimizable parts.
    Feb 01, 2022 7:36:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T19:36:30.688Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 01, 2022 7:36:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T19:36:30.787Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 01, 2022 7:36:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T19:36:30.826Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 01, 2022 7:36:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T19:36:30.881Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 01, 2022 7:36:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T19:36:31.260Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 01, 2022 7:36:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T19:36:31.447Z: Starting 5 workers in us-central1-a...
    Feb 01, 2022 7:36:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T19:36:50.279Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 01, 2022 7:37:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T19:37:16.086Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 01, 2022 7:37:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T19:37:40.656Z: Workers have started successfully.
    Feb 01, 2022 7:37:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T19:37:40.698Z: Workers have started successfully.
    Feb 01, 2022 7:38:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-01T19:38:09.001Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEY2T3d1X1pCZ2JwehoCamQaAmly/streams/CAMaAmpkGgJpciCAw7vZBSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEY2T3d1X1pCZ2JwehoCamQaAmly/streams/CAMaAmpkGgJpciCAw7vZBSgC': offset 90641 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEY2T3d1X1pCZ2JwehoCamQaAmly/streams/CAMaAmpkGgJpciCAw7vZBSgC': offset 90641 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 01, 2022 7:38:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-01T19:38:09.994Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEY2T3d1X1pCZ2JwehoCamQaAmly/streams/CAYaAmpkGgJpciDVm4vvASgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEY2T3d1X1pCZ2JwehoCamQaAmly/streams/CAYaAmpkGgJpciDVm4vvASgC': offset 90044 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEY2T3d1X1pCZ2JwehoCamQaAmly/streams/CAYaAmpkGgJpciDVm4vvASgC': offset 90044 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Feb 01, 2022 7:38:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T19:38:11.915Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 01, 2022 7:38:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T19:38:12.135Z: Cleaning up.
    Feb 01, 2022 7:38:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T19:38:12.229Z: Stopping worker pool...
    Feb 01, 2022 7:40:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T19:40:35.749Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 01, 2022 7:40:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T19:40:35.807Z: Worker pool stopped.
    Feb 01, 2022 7:40:43 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-01_11_36_15-2660206615217096671 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 3ab49eef-184a-47d9-ae66-a8742b871f65 and timestamp: 2022-02-01T19:40:43.690000000Z:
                     Metric:                    Value:
                   read_time                      9.05
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 01, 2022 7:40:43 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 49.728 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 35s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/aygwvs63oehlo

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2984

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2984/display/redirect>

Changes:


------------------------------------------
[...truncated 346.08 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 435a6102abe0742e0fc3bc8dbe43508a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 01, 2022 1:54:09 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 01, 2022 1:54:10 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 01, 2022 1:54:11 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 01, 2022 1:54:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 01, 2022 1:54:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 01, 2022 1:54:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 01, 2022 1:54:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 01, 2022 1:54:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 01, 2022 1:54:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 01, 2022 1:54:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@741767653]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
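
    The three root causes listed in the exception message all point at the same gap: the Row output has no schema attached, so no coder can be inferred for it. A minimal, self-contained sketch of the remedy the message itself suggests, attaching a schema with PCollection.setRowSchema so a RowCoder can be derived. The schema and values here are hypothetical, for illustration only; this is not the integration test's actual code:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        // Hypothetical schema for the illustration.
        Schema schema =
            Schema.builder().addStringField("author").addInt64Field("score").build();
        PCollection<Row> rows =
            p.apply(Create.of(Row.withSchema(schema).addValues("alice", 3L).build()))
                // Attaching the schema gives the PCollection a RowCoder, which is
                // exactly what the coder-inference failure above could not find.
                .setRowSchema(schema);
        p.run().waitUntilFinish();
      }
    }

    Consistent with this, only the two non-push-down reads fail this way below, while readUsingDirectReadMethodPushDown runs to completion.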

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 01, 2022 1:54:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 01, 2022 1:54:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 01, 2022 1:54:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 01, 2022 1:54:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 01, 2022 1:54:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 01, 2022 1:54:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 01, 2022 1:54:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1460265227]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 01, 2022 1:54:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 01, 2022 1:54:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 01, 2022 1:54:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 01, 2022 1:54:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 01, 2022 1:54:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 01, 2022 1:54:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 01, 2022 1:54:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 01, 2022 1:54:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 01, 2022 1:54:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 01, 2022 1:54:22 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 01, 2022 1:54:23 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-cucPaeHkP12S23urh2ulDTin3hnEhAHMKvLejSrb1QQ.jar
    Feb 01, 2022 1:54:23 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8206417087390040280.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-SEz2XqFZ-T-15qOz1bl4ER4xMIgZYbTi_V1w0lJp-t4.jar
    Feb 01, 2022 1:54:23 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Feb 01, 2022 1:54:23 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 01, 2022 1:54:23 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142351 bytes, hash 24b2f4cec6bee5bc07c097529f9757063de32392071cb53502546733bd8e8b36> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-JLL0zsa-5bwHwJdSn5dXBj3jI5IHHLU1AlRnM72OizY.pb
    Feb 01, 2022 1:54:26 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 01, 2022 1:54:27 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 01, 2022 1:54:27 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 01, 2022 1:54:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Feb 01, 2022 1:54:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-02-01_05_54_27-12786019739755007642?project=apache-beam-testing
    Feb 01, 2022 1:54:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-02-01_05_54_27-12786019739755007642
    Feb 01, 2022 1:54:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-02-01_05_54_27-12786019739755007642
    Feb 01, 2022 1:54:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-01T13:54:30.558Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 01, 2022 1:54:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T13:54:39.112Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 01, 2022 1:54:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T13:54:39.894Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 01, 2022 1:54:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T13:54:39.921Z: Expanding GroupByKey operations into optimizable parts.
    Feb 01, 2022 1:54:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T13:54:39.956Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 01, 2022 1:54:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T13:54:40.115Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 01, 2022 1:54:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T13:54:40.169Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 01, 2022 1:54:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T13:54:40.224Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 01, 2022 1:54:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T13:54:41.015Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 01, 2022 1:54:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T13:54:41.128Z: Starting 5 workers in us-central1-a...
    Feb 01, 2022 1:55:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T13:55:05.344Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 01, 2022 1:55:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T13:55:30.544Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 01, 2022 1:55:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T13:55:56.014Z: Workers have started successfully.
    Feb 01, 2022 1:55:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T13:55:56.041Z: Workers have started successfully.
    Feb 01, 2022 1:56:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T13:56:23.444Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 01, 2022 1:56:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T13:56:23.601Z: Cleaning up.
    Feb 01, 2022 1:56:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T13:56:23.717Z: Stopping worker pool...
    Feb 01, 2022 1:58:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T13:58:53.217Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 01, 2022 1:58:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T13:58:53.272Z: Worker pool stopped.
    Feb 01, 2022 1:58:59 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-02-01_05_54_27-12786019739755007642 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 0182ead0-a2c5-4346-9889-9f70f6c7316d and timestamp: 2022-02-01T13:58:59.412000000Z:
                     Metric:                    Value:
                   read_time                     6.173
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 01, 2022 1:58:59 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
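
This warning means the run's metrics were written only to the BigQuery table configured via --metricsBigQueryDataset/--metricsBigQueryTable, not to InfluxDB. Other Beam perf-test jobs supply the missing settings through extra pipeline options; the option names below are taken from those jobs and should be treated as assumptions here, with the measurement, database, and host values placeholders:

    -DbeamTestPipelineOptions=["...existing options...",
      "--influxMeasurement=sql_bqio_read_java_batch",
      "--influxDatabase=beam_test_metrics",
      "--influxHost=http://localhost:8086"]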

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 53.612 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 22s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/r4clg6vhkcqsk

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2983

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2983/display/redirect>

Changes:


------------------------------------------
[...truncated 354.86 KB...]
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 01, 2022 7:32:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 01, 2022 7:32:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 01, 2022 7:33:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 01, 2022 7:33:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 01, 2022 7:33:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 01, 2022 7:33:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 01, 2022 7:33:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1460265227]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 01, 2022 7:33:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 01, 2022 7:33:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 01, 2022 7:33:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 01, 2022 7:33:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 01, 2022 7:33:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 01, 2022 7:33:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 01, 2022 7:33:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 01, 2022 7:33:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 01, 2022 7:33:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 01, 2022 7:33:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 01, 2022 7:33:05 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-cucPaeHkP12S23urh2ulDTin3hnEhAHMKvLejSrb1QQ.jar
    Feb 01, 2022 7:33:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7486402332439550669.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test--aEgnbyXglPbb1INm0bgkBxbEen_M-pOlBYGIjYoXA8.jar
    Feb 01, 2022 7:33:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Feb 01, 2022 7:33:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 01, 2022 7:33:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142351 bytes, hash cb838dc4d9b200ffad843f692919bb1c9d98b8e94e9511787dbd78b21ebb47b5> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-y4ONxNmyAP-thD9pKRm7HJ2YuOlOlRF4fb14sh67R7U.pb
    Feb 01, 2022 7:33:09 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 01, 2022 7:33:09 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 01, 2022 7:33:09 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 01, 2022 7:33:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Feb 01, 2022 7:33:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-31_23_33_09-14655953042806830612?project=apache-beam-testing
    Feb 01, 2022 7:33:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-31_23_33_09-14655953042806830612
    Feb 01, 2022 7:33:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-31_23_33_09-14655953042806830612
    Feb 01, 2022 7:33:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-01T07:33:12.871Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 01, 2022 7:33:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T07:33:24.445Z: Worker configuration: e2-standard-2 in us-central1-b.
    Feb 01, 2022 7:33:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T07:33:25.109Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 01, 2022 7:33:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T07:33:25.170Z: Expanding GroupByKey operations into optimizable parts.
    Feb 01, 2022 7:33:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T07:33:25.229Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 01, 2022 7:33:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T07:33:25.312Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 01, 2022 7:33:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T07:33:25.343Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 01, 2022 7:33:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T07:33:25.374Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 01, 2022 7:33:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T07:33:25.790Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 01, 2022 7:33:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T07:33:25.861Z: Starting 5 workers in us-central1-b...
    Feb 01, 2022 7:33:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T07:33:46.913Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 01, 2022 7:34:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T07:34:07.048Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 01, 2022 7:34:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T07:34:33.541Z: Workers have started successfully.
    Feb 01, 2022 7:34:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T07:34:33.577Z: Workers have started successfully.
    Feb 01, 2022 7:35:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-02-01T07:35:02.618Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDZSMmYzdGVZdEpPXxoCamQaAmly/streams/CAIaAmpkGgJpciDupqq3AigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDZSMmYzdGVZdEpPXxoCamQaAmly/streams/CAIaAmpkGgJpciDupqq3AigC': offset 76714 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDZSMmYzdGVZdEpPXxoCamQaAmly/streams/CAIaAmpkGgJpciDupqq3AigC': offset 76714 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
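
Note that this SEVERE entry is not what failed the build: FAILED_PRECONDITION with "offset 76714 has not been allocated yet" is raised when a reader asks the BigQuery Storage Read API stream for an offset it has not yet made available, which is typically transient. The log below shows the read operation finishing and the job ending with status DONE, so the worker's retry succeeded; the two test failures in this build are the Coder errors, not this read error.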

    Feb 01, 2022 7:35:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T07:35:05.435Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 01, 2022 7:35:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T07:35:05.772Z: Cleaning up.
    Feb 01, 2022 7:35:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T07:35:05.899Z: Stopping worker pool...
    Feb 01, 2022 7:37:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T07:37:38.000Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 01, 2022 7:37:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T07:37:38.117Z: Worker pool stopped.
    Feb 01, 2022 7:37:44 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-31_23_33_09-14655953042806830612 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d0247f79-1e8c-49df-9a3e-974bec6a69ac and timestamp: 2022-02-01T07:37:44.896000000Z:
                     Metric:                    Value:
                   read_time                     9.217
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 01, 2022 7:37:45 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 54.846 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 40s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/dlop4rvmbi2w6

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2982

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2982/display/redirect?page=changes>

Changes:

[daria.malkova] Support SCIO SDK via sbt projects

[samuelw] [BEAM-11648] Share thread pool across RetryManager instances.

[Pablo Estrada] Exclude per-key order tests on Twister2 runner

[Heejong Lee] Fix Java SDK container image name for load-tests and nexmark

[noreply] [BEAM-11936] Enable a few errorprone checks that were broken by pinned

[noreply] [BEAM-13780] Add CONTRIBUTING.md pointing to main guide (#16666)

[noreply] [BEAM-13777] Accept cache capacity as input parameter instead of default

[noreply] [BEAM-13051][A] Enable pylint warnings

[noreply] [BEAM-13779] Fix pr labeling (#16665)

[noreply] Merge pull request #16581 from [BEAM-12164]: Add

[noreply] Fix labeler trigger (#16674)

[noreply] [BEAM-13781] Exclude grpc-netty-shaded from gax-grpc's dependency

[noreply] [BEAM-13051] Fixed pylint warnings : raising-non-exception (E0710),

[noreply] [BEAM-13740] Correctly install go before running tests (#16673)

[noreply] [BEAM-12830] Update local Docker env Go version. (#16670)

[noreply] [BEAM-13051][B] Enable pylint warnings

[noreply] [BEAM-13430] Revert Spark libraries in Spark runner to provided (#16675)

[noreply] [BEAM-12240] Add Java 17 support (#16568)

[noreply] [BEAM-13760] Add random component to default python dataflow job name


------------------------------------------
[...truncated 354.68 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 435a6102abe0742e0fc3bc8dbe43508a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Feb 01, 2022 2:17:36 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Feb 01, 2022 2:17:37 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Feb 01, 2022 2:17:37 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Feb 01, 2022 2:17:40 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 01, 2022 2:17:40 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 01, 2022 2:17:41 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 01, 2022 2:17:41 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 01, 2022 2:17:41 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 01, 2022 2:17:41 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 01, 2022 2:17:42 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@741767653]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Feb 01, 2022 2:17:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 01, 2022 2:17:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 01, 2022 2:17:42 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 01, 2022 2:17:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Feb 01, 2022 2:17:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 01, 2022 2:17:42 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 01, 2022 2:17:42 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1460265227]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 01, 2022 2:17:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 01, 2022 2:17:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 01, 2022 2:17:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Feb 01, 2022 2:17:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Feb 01, 2022 2:17:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Feb 01, 2022 2:17:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Feb 01, 2022 2:17:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Feb 01, 2022 2:17:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Feb 01, 2022 2:17:43 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Feb 01, 2022 2:17:48 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Feb 01, 2022 2:17:48 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-cucPaeHkP12S23urh2ulDTin3hnEhAHMKvLejSrb1QQ.jar
    Feb 01, 2022 2:17:48 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2069638395328058705.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ZyTr6J2d_aqfWCw7uNSWymFUW6aFIeziU1tKPkePZWs.jar
    Feb 01, 2022 2:17:49 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Feb 01, 2022 2:17:49 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Feb 01, 2022 2:17:49 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142351 bytes, hash 89ce8b7c54fa1ef8b8cff3810fe5c2e376002378c4a66a00ab76a972b1dd01fb> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-ic6LfFT6Hvi4z_OBD-XC43YAI3jEpmoAq3apcrHdAfs.pb
    Feb 01, 2022 2:17:52 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Feb 01, 2022 2:17:52 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Feb 01, 2022 2:17:52 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Feb 01, 2022 2:17:52 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Feb 01, 2022 2:17:53 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-31_18_17_52-11195199824076262643?project=apache-beam-testing
    Feb 01, 2022 2:17:53 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-31_18_17_52-11195199824076262643
    Feb 01, 2022 2:17:53 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-31_18_17_52-11195199824076262643
    Feb 01, 2022 2:17:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-02-01T02:17:55.946Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Feb 01, 2022 2:18:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T02:18:06.918Z: Worker configuration: e2-standard-2 in us-central1-a.
    Feb 01, 2022 2:18:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T02:18:07.821Z: Expanding CoGroupByKey operations into optimizable parts.
    Feb 01, 2022 2:18:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T02:18:07.862Z: Expanding GroupByKey operations into optimizable parts.
    Feb 01, 2022 2:18:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T02:18:07.887Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Feb 01, 2022 2:18:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T02:18:07.975Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Feb 01, 2022 2:18:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T02:18:07.996Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Feb 01, 2022 2:18:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T02:18:08.016Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Feb 01, 2022 2:18:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T02:18:08.458Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 01, 2022 2:18:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T02:18:08.571Z: Starting 5 workers in us-central1-a...
    Feb 01, 2022 2:18:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T02:18:19.833Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Feb 01, 2022 2:19:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T02:19:05.855Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Feb 01, 2022 2:19:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T02:19:30.204Z: Workers have started successfully.
    Feb 01, 2022 2:19:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T02:19:30.237Z: Workers have started successfully.
    Feb 01, 2022 2:20:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T02:19:57.678Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Feb 01, 2022 2:20:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T02:19:57.854Z: Cleaning up.
    Feb 01, 2022 2:20:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T02:19:58.039Z: Stopping worker pool...
    Feb 01, 2022 2:22:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T02:22:37.818Z: Autoscaling: Resized worker pool from 5 to 0.
    Feb 01, 2022 2:22:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-02-01T02:22:37.883Z: Worker pool stopped.
    Feb 01, 2022 2:22:44 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-31_18_17_52-11195199824076262643 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 79855501-6071-40fa-bf0d-16c559aebf43 and timestamp: 2022-02-01T02:22:44.712000000Z:
                     Metric:                    Value:
                   read_time                     7.089
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Feb 01, 2022 2:22:44 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 5 mins 12.071 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings
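
To see which script or plugin triggers the deprecations, the flag mentioned above can be added to the same task invocation, e.g.:

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --warning-mode all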

BUILD FAILED in 6m 3s
165 actionable tasks: 107 executed, 56 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/pwjx674szwaam

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2981

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2981/display/redirect?page=changes>

Changes:

[daria.malkova] Change executable name for Go tests
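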

[avilovpavel6] Fix java test

[noreply] [BEAM-13769] Skip test_main_session_not_staged_when_using_cloudpickle

[noreply] [BEAM-6744] Support implicitly setting project id in Go Dataflow runner

[noreply] Merge pull request #16493 from [BEAM-13632][Playground] Save catalog

[noreply] Exclude jul-to-slf4j from Spark runner in quickstart POM templates


------------------------------------------
[...truncated 345.60 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is bafc7366a1ea9ffe36afbc2d83b73629
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 43'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 43'
Successfully started process 'Gradle Test Executor 43'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 31, 2022 7:07:18 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 31, 2022 7:07:19 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 31, 2022 7:07:20 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 31, 2022 7:07:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 31, 2022 7:07:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 31, 2022 7:07:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 31, 2022 7:07:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 31, 2022 7:07:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 31, 2022 7:07:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 31, 2022 7:07:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@741767653]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
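
Of the three root causes listed, the second is the actionable one here: a PCollection of Beam Rows needs an explicit schema before the pipeline graph is finalized. A minimal sketch of attaching one (the field names mirror the SELECT list above; the types and nullability are assumptions, and withRowSchema is a hypothetical helper):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    final class RowSchemaSketch {
      // Hypothetical helper: attach a schema so Beam can infer a RowCoder
      // for the Row output. Field types and nullability are assumed.
      static PCollection<Row> withRowSchema(PCollection<Row> rows) {
        Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();
        // Equivalent alternative: rows.setCoder(RowCoder.of(schema))
        return rows.setRowSchema(schema);
      }
    }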

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 31, 2022 7:07:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 31, 2022 7:07:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 31, 2022 7:07:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 31, 2022 7:07:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 31, 2022 7:07:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 31, 2022 7:07:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 31, 2022 7:07:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1460265227]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 31, 2022 7:07:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 31, 2022 7:07:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 31, 2022 7:07:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 31, 2022 7:07:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 31, 2022 7:07:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 31, 2022 7:07:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 31, 2022 7:07:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 31, 2022 7:07:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
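
The plan and filter lines above show the push-down working as intended: only the four referenced fields are read, and the predicate is evaluated by the BigQuery Storage API rather than by a downstream BeamCalcRel. As a rough sketch, the generated read is roughly equivalent to hand-writing the following (the table reference is an assumption; this job reads a test copy of the Hacker News dataset):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    final class PushDownSketch {
      // Roughly what the planner produced: project only the used fields and
      // let the Storage API evaluate the filter. Table name is illustrative.
      static TypedRead<TableRow> pushedDownRead() {
        return BigQueryIO.readTableRows()
            .from("apache-beam-testing:beam.HACKER_NEWS")
            .withMethod(TypedRead.Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
      }
    }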
    Jan 31, 2022 7:07:26 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 31, 2022 7:07:31 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 31, 2022 7:07:31 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-cucPaeHkP12S23urh2ulDTin3hnEhAHMKvLejSrb1QQ.jar
    Jan 31, 2022 7:07:31 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4005912532889220726.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-XyAzY9lnjRS5yEdnlwvDwbnsy0F2o3LTDuIE2jZ6UsY.jar
    Jan 31, 2022 7:07:32 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Jan 31, 2022 7:07:32 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 31, 2022 7:07:32 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142352 bytes, hash d31d3f480bbd5ff569d776139cfcdbfdf3b241ee6e3d8aa99f4373db1197fd4d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-0x0_SAu9X_Vp13YTnPzb_fOyQe5uPYqpn0Nz2xGX_U0.pb
    Jan 31, 2022 7:07:35 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 31, 2022 7:07:35 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 31, 2022 7:07:35 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 31, 2022 7:07:35 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 31, 2022 7:07:36 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-31_11_07_35-1495215892249182485?project=apache-beam-testing
    Jan 31, 2022 7:07:36 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-31_11_07_35-1495215892249182485
    Jan 31, 2022 7:07:36 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-31_11_07_35-1495215892249182485
    Jan 31, 2022 7:07:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-31T19:07:38.405Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 31, 2022 7:07:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T19:07:46.845Z: Worker configuration: e2-standard-2 in us-central1-a.
    Jan 31, 2022 7:07:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T19:07:47.513Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 31, 2022 7:07:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T19:07:47.552Z: Expanding GroupByKey operations into optimizable parts.
    Jan 31, 2022 7:07:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T19:07:47.590Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 31, 2022 7:07:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T19:07:47.668Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 31, 2022 7:07:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T19:07:47.697Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 31, 2022 7:07:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T19:07:47.728Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 31, 2022 7:07:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T19:07:48.047Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 31, 2022 7:07:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T19:07:48.123Z: Starting 5 workers in us-central1-a...
    Jan 31, 2022 7:07:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T19:07:56.347Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 31, 2022 7:08:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T19:08:39.987Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 31, 2022 7:09:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T19:09:04.702Z: Workers have started successfully.
    Jan 31, 2022 7:09:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T19:09:04.727Z: Workers have started successfully.
    Jan 31, 2022 7:09:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T19:09:30.996Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 31, 2022 7:09:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T19:09:31.373Z: Cleaning up.
    Jan 31, 2022 7:09:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T19:09:31.451Z: Stopping worker pool...
    Jan 31, 2022 7:12:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T19:12:12.147Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 31, 2022 7:12:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T19:12:12.196Z: Worker pool stopped.
    Jan 31, 2022 7:12:22 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-31_11_07_35-1495215892249182485 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 86e9955c-8f4e-421d-b032-119dd9a369a5 and timestamp: 2022-01-31T19:12:22.116000000Z:
                     Metric:                    Value:
                   read_time                     5.758
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 31, 2022 7:12:22 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 43 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.007 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.005 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 5 mins 7.773 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 22s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/e2fljtlmdk6r4

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2980

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2980/display/redirect?page=changes>

Changes:

[mrudary] Generalize S3FileSystem to support multiple URI schemes.


------------------------------------------
[...truncated 370.53 KB...]
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFNocTA5U3lwMTViahoCamQaAmly/streams/CAMaAmpkGgJpciCu3qfHAigC': offset 88619 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 31, 2022 1:32:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-31T13:32:09.782Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFNocTA5U3lwMTViahoCamQaAmly/streams/CAQaAmpkGgJpciDA_tdyKAI"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFNocTA5U3lwMTViahoCamQaAmly/streams/CAQaAmpkGgJpciDA_tdyKAI': offset 79877 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFNocTA5U3lwMTViahoCamQaAmly/streams/CAQaAmpkGgJpciDA_tdyKAI': offset 79877 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 31, 2022 1:32:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-31T13:32:09.793Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFNocTA5U3lwMTViahoCamQaAmly/streams/CAEaAmpkGgJpciDD88jHAygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFNocTA5U3lwMTViahoCamQaAmly/streams/CAEaAmpkGgJpciDD88jHAygC': offset 100131 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFNocTA5U3lwMTViahoCamQaAmly/streams/CAEaAmpkGgJpciDD88jHAygC': offset 100131 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 31, 2022 1:32:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-31T13:32:11.058Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFNocTA5U3lwMTViahoCamQaAmly/streams/CAcaAmpkGgJpciDr9dqVAygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFNocTA5U3lwMTViahoCamQaAmly/streams/CAcaAmpkGgJpciDr9dqVAygC': offset 107731 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFNocTA5U3lwMTViahoCamQaAmly/streams/CAcaAmpkGgJpciDr9dqVAygC': offset 107731 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
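
These repeated FAILED_PRECONDITION errors are all instances of the Storage Read API's resumption protocol going wrong: after a transient stream error the Beam reader reconnects to the same stream at the next offset it expects, and the server rejects any offset it has not yet produced. A minimal sketch of that resume handshake against the raw v1 client (the stream name and offset are illustrative):

    import com.google.api.gax.rpc.ServerStream;
    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    final class ResumeSketch {
      // Reconnect to an existing read stream at the next expected row offset.
      // The server answers FAILED_PRECONDITION ("offset ... has not been
      // allocated yet") when that offset is beyond what it has committed.
      static ServerStream<ReadRowsResponse> resume(
          BigQueryReadClient client, String streamName, long nextOffset) {
        ReadRowsRequest request =
            ReadRowsRequest.newBuilder()
                .setReadStream(streamName)
                .setOffset(nextOffset)
                .build();
        return client.readRowsCallable().call(request);
      }
    }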

    Jan 31, 2022 1:32:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T13:32:14.106Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 31, 2022 1:32:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T13:32:14.284Z: Cleaning up.
    Jan 31, 2022 1:32:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T13:32:14.362Z: Stopping worker pool...
    Jan 31, 2022 1:34:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T13:34:43.033Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 31, 2022 1:34:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T13:34:43.090Z: Worker pool stopped.
    Jan 31, 2022 1:34:50 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-31_05_30_08-6456080052609460015 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 7920abc6-d331-4a16-a3bb-2c332e9d9bd2 and timestamp: 2022-01-31T13:34:50.444000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.373

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 31, 2022 1:34:50 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 5 mins 3.384 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 47s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/nbks27gwn4ptg

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2979

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2979/display/redirect>

Changes:


------------------------------------------
[...truncated 361.92 KB...]
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-30_22_58_33-10146138854569074667
    Jan 31, 2022 6:58:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-31T06:58:36.657Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 31, 2022 6:58:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T06:58:44.077Z: Worker configuration: e2-standard-2 in us-central1-c.
    Jan 31, 2022 6:58:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T06:58:44.793Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 31, 2022 6:58:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T06:58:44.835Z: Expanding GroupByKey operations into optimizable parts.
    Jan 31, 2022 6:58:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T06:58:44.863Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 31, 2022 6:58:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T06:58:44.937Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 31, 2022 6:58:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T06:58:44.966Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 31, 2022 6:58:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T06:58:45.047Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 31, 2022 6:58:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T06:58:45.345Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 31, 2022 6:58:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T06:58:45.437Z: Starting 5 workers in us-central1-c...
    Jan 31, 2022 6:59:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T06:59:08.011Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 31, 2022 6:59:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T06:59:26.067Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 31, 2022 6:59:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T06:59:51.401Z: Workers have started successfully.
    Jan 31, 2022 6:59:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T06:59:51.437Z: Workers have started successfully.
    Jan 31, 2022 7:00:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-31T07:00:19.516Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHdGWmhkcVhGZlZhbhoCamQaAmly/streams/CAkaAmpkGgJpciDfi_wvKAI"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHdGWmhkcVhGZlZhbhoCamQaAmly/streams/CAkaAmpkGgJpciDfi_wvKAI': offset 76593 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHdGWmhkcVhGZlZhbhoCamQaAmly/streams/CAkaAmpkGgJpciDfi_wvKAI': offset 76593 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 31, 2022 7:00:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-31T07:00:19.530Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHdGWmhkcVhGZlZhbhoCamQaAmly/streams/CAMaAmpkGgJpciDQvOmBBigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHdGWmhkcVhGZlZhbhoCamQaAmly/streams/CAMaAmpkGgJpciDQvOmBBigC': offset 73077 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHdGWmhkcVhGZlZhbhoCamQaAmly/streams/CAMaAmpkGgJpciDQvOmBBigC': offset 73077 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 31, 2022 7:00:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-31T07:00:19.671Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHdGWmhkcVhGZlZhbhoCamQaAmly/streams/CAIaAmpkGgJpciCWgpzWAigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHdGWmhkcVhGZlZhbhoCamQaAmly/streams/CAIaAmpkGgJpciCWgpzWAigC': offset 67309 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHdGWmhkcVhGZlZhbhoCamQaAmly/streams/CAIaAmpkGgJpciCWgpzWAigC': offset 67309 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
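
The three SEVERE entries above share one root cause: each reader asked the BigQuery Storage Read API for rows at an offset the stream had not produced yet, and the service answered FAILED_PRECONDITION ("offset N has not been allocated yet"). Batch Dataflow retries failed work items, which is why the fused read stage still finishes below and the job terminates with status DONE. A minimal sketch of the underlying call, assuming the standard java-bigquerystorage v1 client; the stream name and offset are placeholders, not values from this job:

    // Sketch only: resuming a Storage Read API stream at a row offset.
    // Requesting an offset beyond what the stream has allocated so far
    // yields the FAILED_PRECONDITION seen in the traces above.
    import com.google.api.gax.rpc.ServerStream;
    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    public class ResumeReadAtOffset {
      public static void main(String[] args) throws Exception {
        // Placeholder stream name; real names look like the
        // sessions/.../streams/... paths quoted in the log.
        String stream = "projects/<project>/locations/us/sessions/<session>/streams/<stream>";
        long offset = 0; // resume point; must not exceed rows already allocated
        try (BigQueryReadClient client = BigQueryReadClient.create()) {
          ReadRowsRequest request =
              ReadRowsRequest.newBuilder().setReadStream(stream).setOffset(offset).build();
          ServerStream<ReadRowsResponse> rows = client.readRowsCallable().call(request);
          for (ReadRowsResponse response : rows) {
            offset += response.getRowCount(); // track rows consumed for a later resume
          }
        }
      }
    }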

    Jan 31, 2022 7:00:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T07:00:22.733Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 31, 2022 7:00:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T07:00:22.902Z: Cleaning up.
    Jan 31, 2022 7:00:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T07:00:22.970Z: Stopping worker pool...
    Jan 31, 2022 7:02:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T07:02:51.147Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 31, 2022 7:02:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T07:02:51.217Z: Worker pool stopped.
    Jan 31, 2022 7:02:56 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-30_22_58_33-10146138854569074667 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c88c2d2b-2cb3-4189-94ec-4cd76eb8c83f and timestamp: 2022-01-31T07:02:56.946000000Z:
                     Metric:                    Value:
                   read_time                     9.591
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 31, 2022 7:02:57 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
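
The InfluxDB warning is unrelated to the read failures: InfluxDBPublisher skips publishing when no measurement/database is configured, so this run's metrics only reach the BigQuery table named in the pipeline options. Assuming the InfluxDB options used by Beam's other performance-test jobs (an assumption, not taken from this job's configuration), publishing could be enabled by extending -DbeamTestPipelineOptions with entries like:

    "--influxDatabase=<database>","--influxMeasurement=<measurement>","--influxHost=<host>"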

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 43.86 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 10s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/jmmcyjlbbl2mk

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2978

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2978/display/redirect>

Changes:


------------------------------------------
[...truncated 374.97 KB...]
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-30_17_23_21-2842562532500115358
    Jan 31, 2022 1:23:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-31T01:23:25.305Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
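
This warning is expected for the perf-test configuration: with --autoscalingAlgorithm=NONE the worker pool is pinned to --numWorkers (5 here) and --maxNumWorkers has no effect. The later "Autoscaling: Raised the number of workers to 5" line reflects the fixed pool reaching its target size, not a scaling decision.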
    Jan 31, 2022 1:23:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T01:23:33.336Z: Worker configuration: e2-standard-2 in us-central1-f.
    Jan 31, 2022 1:23:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T01:23:34.079Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 31, 2022 1:23:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T01:23:34.109Z: Expanding GroupByKey operations into optimizable parts.
    Jan 31, 2022 1:23:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T01:23:34.142Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 31, 2022 1:23:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T01:23:34.204Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 31, 2022 1:23:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T01:23:34.232Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 31, 2022 1:23:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T01:23:34.264Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 31, 2022 1:23:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T01:23:34.616Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
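
The "+"-joined name identifies a single fused stage: per the "Fusing consumer" lines above, Dataflow fused the BigQueryStorageTableSource read with the RowMonitor and TimeMonitor ParDos, and the BeamPushDownIOSourceRel prefix indicates that Beam SQL pushed the query's projection down into the BigQuery source. All of the SEVERE read errors below occur inside this one stage.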
    Jan 31, 2022 1:23:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T01:23:34.693Z: Starting 5 workers in us-central1-f...
    Jan 31, 2022 1:24:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T01:24:05.193Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 31, 2022 1:24:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T01:24:27.078Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 31, 2022 1:24:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T01:24:52.317Z: Workers have started successfully.
    Jan 31, 2022 1:24:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T01:24:52.367Z: Workers have started successfully.
    Jan 31, 2022 1:25:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-31T01:25:22.780Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDZ4T1QwcGxITFVRchoCamQaAmly/streams/CAgaAmpkGgJpciD8qpC9AygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDZ4T1QwcGxITFVRchoCamQaAmly/streams/CAgaAmpkGgJpciD8qpC9AygC': offset 64694 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDZ4T1QwcGxITFVRchoCamQaAmly/streams/CAgaAmpkGgJpciD8qpC9AygC': offset 64694 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 31, 2022 1:25:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-31T01:25:23.033Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDZ4T1QwcGxITFVRchoCamQaAmly/streams/CAUaAmpkGgJpciD6jbDEBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDZ4T1QwcGxITFVRchoCamQaAmly/streams/CAUaAmpkGgJpciD6jbDEBygC': offset 117631 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDZ4T1QwcGxITFVRchoCamQaAmly/streams/CAUaAmpkGgJpciD6jbDEBygC': offset 117631 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 31, 2022 1:25:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-31T01:25:24.042Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDZ4T1QwcGxITFVRchoCamQaAmly/streams/CAQaAmpkGgJpciD23ZZgKAI"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDZ4T1QwcGxITFVRchoCamQaAmly/streams/CAQaAmpkGgJpciD23ZZgKAI': offset 108217 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDZ4T1QwcGxITFVRchoCamQaAmly/streams/CAQaAmpkGgJpciD23ZZgKAI': offset 108217 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 31, 2022 1:25:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T01:25:27.055Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 31, 2022 1:25:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T01:25:27.236Z: Cleaning up.
    Jan 31, 2022 1:25:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T01:25:27.308Z: Stopping worker pool...
    Jan 31, 2022 1:27:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T01:27:46.260Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 31, 2022 1:27:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-31T01:27:46.305Z: Worker pool stopped.
    Jan 31, 2022 1:27:52 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-30_17_23_21-2842562532500115358 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c1725a43-41fe-4491-ad22-4718e41fc0bd and timestamp: 2022-01-31T01:27:52.685000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.707

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 31, 2022 1:27:52 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 5 mins 15.049 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 46s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/xv3pzykx2dvpe

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2977

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2977/display/redirect>

Changes:


------------------------------------------
[...truncated 369.94 KB...]
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG9wSHdiWmJlVi1mMxoCamQaAmly/streams/CAIaAmpkGgJpciD7qP65AygC': offset 121263 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 30, 2022 7:11:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-30T19:11:03.346Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDG9wSHdiWmJlVi1mMxoCamQaAmly/streams/CAkaAmpkGgJpciCh-tzVASgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG9wSHdiWmJlVi1mMxoCamQaAmly/streams/CAkaAmpkGgJpciCh-tzVASgC': offset 116774 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG9wSHdiWmJlVi1mMxoCamQaAmly/streams/CAkaAmpkGgJpciCh-tzVASgC': offset 116774 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 30, 2022 7:11:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-30T19:11:04.529Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDG9wSHdiWmJlVi1mMxoCamQaAmly/streams/CAQaAmpkGgJpciCm-8ZMKAI"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG9wSHdiWmJlVi1mMxoCamQaAmly/streams/CAQaAmpkGgJpciCm-8ZMKAI': offset 96929 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG9wSHdiWmJlVi1mMxoCamQaAmly/streams/CAQaAmpkGgJpciCm-8ZMKAI': offset 96929 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
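
The FAILED_PRECONDITION above is the BigQuery Storage Read API rejecting a ReadRows call whose requested stream offset lies beyond what the server has produced so far -- typically a read resumed from a speculative offset rather than one that was actually consumed. A rough sketch of offset-safe resumption against the v1 storage client (not the worker's actual code; the row-decoding step is elided):

    import com.google.api.gax.rpc.ServerStream;
    import com.google.api.gax.rpc.UnavailableException;
    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    static void readStream(BigQueryReadClient client, String streamName) {
      long consumedOffset = 0;       // rows actually handed downstream so far
      boolean drained = false;
      while (!drained) {
        ReadRowsRequest request =
            ReadRowsRequest.newBuilder()
                .setReadStream(streamName)   // "projects/.../sessions/.../streams/..."
                .setOffset(consumedOffset)   // must not exceed rows already served
                .build();
        try {
          ServerStream<ReadRowsResponse> stream =
              client.readRowsCallable().call(request);
          for (ReadRowsResponse response : stream) {
            // ... decode response.getAvroRows() or getArrowRecordBatch() here ...
            consumedOffset += response.getRowCount();
          }
          drained = true;                    // stream finished cleanly
        } catch (UnavailableException e) {
          // Transient server error: loop and resume from consumedOffset.
          // Resuming from any larger, never-served offset reproduces the
          // "offset N has not been allocated yet" failure logged above.
        }
      }
    }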

    Jan 30, 2022 7:11:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-30T19:11:04.647Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDG9wSHdiWmJlVi1mMxoCamQaAmly/streams/CAEaAmpkGgJpciDgxcurAigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG9wSHdiWmJlVi1mMxoCamQaAmly/streams/CAEaAmpkGgJpciDgxcurAigC': offset 89091 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG9wSHdiWmJlVi1mMxoCamQaAmly/streams/CAEaAmpkGgJpciDgxcurAigC': offset 89091 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 30, 2022 7:11:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T19:11:08.068Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 30, 2022 7:11:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T19:11:08.222Z: Cleaning up.
    Jan 30, 2022 7:11:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T19:11:08.298Z: Stopping worker pool...
    Jan 30, 2022 7:13:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T19:13:32.566Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 30, 2022 7:13:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T19:13:32.619Z: Worker pool stopped.
    Jan 30, 2022 7:13:38 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-30_11_09_11-16751825671459497902 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 655b39bf-f5f1-4d9d-b870-0da9328006e9 and timestamp: 2022-01-30T19:13:38.117000000Z:
                     Metric:                    Value:
                   read_time                    11.397
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 30, 2022 7:13:38 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
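
The InfluxDB warning above means the run's metrics were computed but not exported. In Beam's test utilities the publisher is driven by pipeline options; assuming the option names of InfluxDBSettings in this Beam version, publishing would be enabled by adding something like the following to the -DbeamTestPipelineOptions list (values here are placeholders):

    "--influxMeasurement=sql_bqio_read_java_batch","--influxDatabase=beam_test_metrics","--influxHost=http://localhost:8086"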

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 46.302 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org
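
To reproduce only the failing class locally, the integrationTest task can be filtered to it (assuming a Beam checkout with the Gradle wrapper, and the same -DbeamTestPipelineOptions system property as the original invocation):

    > ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --tests '*BigQueryIOPushDownIT' --stacktrace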

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 34s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/zmv3jtvbn66xu

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2976

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2976/display/redirect>

Changes:


------------------------------------------
[...truncated 357.18 KB...]
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 30, 2022 1:14:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 30, 2022 1:14:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 30, 2022 1:14:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
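
This is the Beam SQL query the test hands to the Calcite planner. Over an ordinary PCollection<Row>, the same query shape would run through SqlTransform as sketched below; the projection and filter pushdown visible later in this log applies only when the table comes from a pushdown-capable provider such as the BigQuery one, not from PCOLLECTION input:

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // input: an assumed PCollection<Row> with by/type/title/score fields.
    PCollection<Row> filtered =
        input.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score "
                    + "FROM PCOLLECTION "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
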
    Jan 30, 2022 1:14:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 30, 2022 1:14:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 30, 2022 1:14:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 30, 2022 1:14:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 30, 2022 1:14:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
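
The BEAMPlan above shows what the pushdown buys: only the four used fields are requested, and the filter is evaluated server-side by the Storage Read API rather than inside the pipeline. Expressed directly against BigQueryIO, the resulting read is roughly equivalent to the following sketch (the table reference and pipeline wiring are placeholders):

    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;

    p.apply(
        BigQueryIO.readTableRows()
            .from("some-project:beam.HACKER_NEWS")          // placeholder table
            .withMethod(Method.DIRECT_READ)                 // Storage Read API
            // Projection pushdown: request only the used fields.
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            // Predicate pushdown: evaluated by the read session per stream.
            .withRowRestriction(
                "(type = 'story' OR type = 'job') AND score > 2"));
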
    Jan 30, 2022 1:15:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 30, 2022 1:15:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 30, 2022 1:15:03 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-cucPaeHkP12S23urh2ulDTin3hnEhAHMKvLejSrb1QQ.jar
    Jan 30, 2022 1:15:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test420534605500439837.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-PAi5CJTQH4oA6Thn0Z1W5ISn1kVhL4-iPCb-Vzh48bg.jar
    Jan 30, 2022 1:15:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Jan 30, 2022 1:15:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 30, 2022 1:15:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142351 bytes, hash 160c8de698fbc6eb45a410063f8b666b895a44d1b057fd8bac5e685da4f72867> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-FgyN5pj7xutFpBAGP4tma4laRNGwV_2LrF5oXaT3KGc.pb
    Jan 30, 2022 1:15:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 30, 2022 1:15:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 30, 2022 1:15:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 30, 2022 1:15:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 30, 2022 1:15:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-30_05_15_08-3583709776279722590?project=apache-beam-testing
    Jan 30, 2022 1:15:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-30_05_15_08-3583709776279722590
    Jan 30, 2022 1:15:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-30_05_15_08-3583709776279722590
    Jan 30, 2022 1:15:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-30T13:15:20.358Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 30, 2022 1:15:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T13:15:30.446Z: Worker configuration: e2-standard-2 in us-central1-f.
    Jan 30, 2022 1:15:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T13:15:31.310Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 30, 2022 1:15:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T13:15:31.340Z: Expanding GroupByKey operations into optimizable parts.
    Jan 30, 2022 1:15:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T13:15:31.364Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 30, 2022 1:15:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T13:15:31.434Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 30, 2022 1:15:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T13:15:31.477Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 30, 2022 1:15:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T13:15:31.503Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 30, 2022 1:15:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T13:15:31.816Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 30, 2022 1:15:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T13:15:31.869Z: Starting 5 workers in us-central1-f...
    Jan 30, 2022 1:15:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T13:15:48.381Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 30, 2022 1:16:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T13:16:19.225Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 30, 2022 1:16:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T13:16:44.431Z: Workers have started successfully.
    Jan 30, 2022 1:16:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T13:16:44.471Z: Workers have started successfully.
    Jan 30, 2022 1:17:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-30T13:17:13.690Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGJJcTVqUVNzaU9FWhoCamQaAmly/streams/CAIaAmpkGgJpciDS--J9KAI"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGJJcTVqUVNzaU9FWhoCamQaAmly/streams/CAIaAmpkGgJpciDS--J9KAI': offset 70714 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGJJcTVqUVNzaU9FWhoCamQaAmly/streams/CAIaAmpkGgJpciDS--J9KAI': offset 70714 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 30, 2022 1:17:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-30T13:17:13.772Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGJJcTVqUVNzaU9FWhoCamQaAmly/streams/CAUaAmpkGgJpciDNwqz1BigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGJJcTVqUVNzaU9FWhoCamQaAmly/streams/CAUaAmpkGgJpciDNwqz1BigC': offset 124563 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGJJcTVqUVNzaU9FWhoCamQaAmly/streams/CAUaAmpkGgJpciDNwqz1BigC': offset 124563 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 30, 2022 1:17:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T13:17:17.136Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 30, 2022 1:17:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T13:17:17.281Z: Cleaning up.
    Jan 30, 2022 1:17:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T13:17:17.341Z: Stopping worker pool...
    Jan 30, 2022 1:19:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T13:19:42.774Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 30, 2022 1:19:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T13:19:42.822Z: Worker pool stopped.
    Jan 30, 2022 1:19:48 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-30_05_15_08-3583709776279722590 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f24fdb1a-fed6-483e-b3af-32db048c5d14 and timestamp: 2022-01-30T13:19:48.571000000Z:
                     Metric:                    Value:
                   read_time                     9.778
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 30, 2022 1:19:48 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.035 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 5 mins 0.081 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 29s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/dygmvdsjdgica

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2975

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2975/display/redirect>

Changes:


------------------------------------------
[...truncated 361.80 KB...]
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 30, 2022 7:12:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 30, 2022 7:12:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 30, 2022 7:12:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 30, 2022 7:12:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 30, 2022 7:12:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 30, 2022 7:12:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 30, 2022 7:12:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 30, 2022 7:12:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 30, 2022 7:12:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 30, 2022 7:12:12 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 30, 2022 7:12:12 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-cucPaeHkP12S23urh2ulDTin3hnEhAHMKvLejSrb1QQ.jar
    Jan 30, 2022 7:12:12 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test998462881179239334.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-5-GreDFS6V5q89ntAvKDJOX90zxwp26q5xWwDoRUBNA.jar
    Jan 30, 2022 7:12:13 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Jan 30, 2022 7:12:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 30, 2022 7:12:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142352 bytes, hash 171545eb37fc70aa71c74dae7c31bf8daffd271afcf8784e0882553119aaec86> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-FxVF6zf8cKpxx02ufDG_ja_9Jxr8-HhOCIJVMRmq7IY.pb
    Jan 30, 2022 7:12:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 30, 2022 7:12:17 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 30, 2022 7:12:17 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 30, 2022 7:12:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 30, 2022 7:12:20 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-29_23_12_17-10880699358932839449?project=apache-beam-testing
    Jan 30, 2022 7:12:20 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-29_23_12_17-10880699358932839449
    Jan 30, 2022 7:12:20 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-29_23_12_17-10880699358932839449
    Jan 30, 2022 7:12:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-30T07:12:23.490Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 30, 2022 7:12:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T07:12:30.223Z: Worker configuration: e2-standard-2 in us-central1-c.
    Jan 30, 2022 7:12:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T07:12:30.937Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 30, 2022 7:12:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T07:12:30.976Z: Expanding GroupByKey operations into optimizable parts.
    Jan 30, 2022 7:12:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T07:12:31.003Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 30, 2022 7:12:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T07:12:31.069Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 30, 2022 7:12:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T07:12:31.093Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 30, 2022 7:12:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T07:12:31.127Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 30, 2022 7:12:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T07:12:31.441Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 30, 2022 7:12:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T07:12:31.510Z: Starting 5 workers in us-central1-c...
    Jan 30, 2022 7:12:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T07:12:39.470Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 30, 2022 7:13:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T07:13:15.675Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 30, 2022 7:13:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T07:13:40.038Z: Workers have started successfully.
    Jan 30, 2022 7:13:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T07:13:40.065Z: Workers have started successfully.
    Jan 30, 2022 7:14:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-30T07:14:10.386Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHF5Y2ZpLW5wYTl1bRoCamQaAmly/streams/CAQaAmpkGgJpciCVm--jAigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHF5Y2ZpLW5wYTl1bRoCamQaAmly/streams/CAQaAmpkGgJpciCVm--jAigC': offset 94202 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHF5Y2ZpLW5wYTl1bRoCamQaAmly/streams/CAQaAmpkGgJpciCVm--jAigC': offset 94202 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 30, 2022 7:14:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-30T07:14:10.457Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHF5Y2ZpLW5wYTl1bRoCamQaAmly/streams/GgJqZBoCaXIglPDrggcoAg"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHF5Y2ZpLW5wYTl1bRoCamQaAmly/streams/GgJqZBoCaXIglPDrggcoAg': offset 94075 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHF5Y2ZpLW5wYTl1bRoCamQaAmly/streams/GgJqZBoCaXIglPDrggcoAg': offset 94075 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 30, 2022 7:14:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T07:14:12.472Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 30, 2022 7:14:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T07:14:12.608Z: Cleaning up.
    Jan 30, 2022 7:14:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T07:14:12.663Z: Stopping worker pool...
    Jan 30, 2022 7:16:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T07:16:32.402Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 30, 2022 7:16:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T07:16:32.449Z: Worker pool stopped.
    Jan 30, 2022 7:16:37 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-29_23_12_17-10880699358932839449 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): b3e49536-b6ae-4cc6-b43e-51abcfd5d3e7 and timestamp: 2022-01-30T07:16:37.976000000Z:
                     Metric:                    Value:
                   read_time                     8.761
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 30, 2022 7:16:38 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

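This warning repeats in every run of the suite: the read_time and fields_read values above are computed but then dropped because the InfluxDB publisher has no target database or measurement configured. If publishing were wanted, the job would pass the Influx settings as additional pipeline options, along these lines (option names follow the influx* options used by Beam's testutils publishing; the values are placeholders to verify against the job config):

    "--influxDatabase=beam_test_metrics",
    "--influxMeasurement=sql_bqio_read_java_batch",
    "--influxHost=http://localhost:8086"
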
Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.079 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.114 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 43.785 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 26s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/yrywdv2atnzgm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2974

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2974/display/redirect>

Changes:


------------------------------------------
[...truncated 362.89 KB...]
    Jan 30, 2022 1:20:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 30, 2022 1:20:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 30, 2022 1:20:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 30, 2022 1:20:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 30, 2022 1:20:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 30, 2022 1:20:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 30, 2022 1:20:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 30, 2022 1:20:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
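
The plan above is the point of the push-down test: instead of a BeamCalcRel filtering after a full-table read, BeamPushDownIOSourceRel asks the source for only the four used fields and ships the predicate to the storage layer, which is what the "Pushing down the following filter" line confirms. At the IO level the same read can be written directly against BigQueryIO, roughly as follows (a sketch, not the IT's code; the table name and options variable are placeholders):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;

    Pipeline pipeline = Pipeline.create(options);
    pipeline.apply("PushedDownRead",
        BigQueryIO.readTableRows()
            .from("some-project:some_dataset.HACKER_NEWS")  // placeholder table
            .withMethod(Method.DIRECT_READ)                 // BigQuery Storage Read API
            // project only the columns the query uses:
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            // the predicate the planner pushed down:
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

withSelectedFields and withRowRestriction are only honored with DIRECT_READ, which matches the "BigQuery method is set to: DIRECT_READ" lines above.
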
    Jan 30, 2022 1:20:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 30, 2022 1:21:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 30, 2022 1:21:02 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-cucPaeHkP12S23urh2ulDTin3hnEhAHMKvLejSrb1QQ.jar
    Jan 30, 2022 1:21:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6137646273783143206.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ky6XE87EM3qC-kMSEZGS2I8johIhW39sDATzkTuI8cw.jar
    Jan 30, 2022 1:21:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 1 seconds
    Jan 30, 2022 1:21:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 30, 2022 1:21:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142351 bytes, hash cf59760ce032ae1492d648ad47eaae797894b74656e1f4a0d4e4d2b5ddfd400d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-z1l2DOAyrhSS1kitR-queXiUt0ZW4fSg1OTStd39QA0.pb
    Jan 30, 2022 1:21:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 30, 2022 1:21:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 30, 2022 1:21:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 30, 2022 1:21:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 30, 2022 1:21:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-29_17_21_07-16646424767662681416?project=apache-beam-testing
    Jan 30, 2022 1:21:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-29_17_21_07-16646424767662681416
    Jan 30, 2022 1:21:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-29_17_21_07-16646424767662681416
    Jan 30, 2022 1:21:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-30T01:21:12.332Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 30, 2022 1:21:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T01:21:22.540Z: Worker configuration: e2-standard-2 in us-central1-f.
    Jan 30, 2022 1:21:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T01:21:23.199Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 30, 2022 1:21:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T01:21:23.243Z: Expanding GroupByKey operations into optimizable parts.
    Jan 30, 2022 1:21:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T01:21:23.275Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 30, 2022 1:21:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T01:21:23.348Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 30, 2022 1:21:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T01:21:23.374Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 30, 2022 1:21:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T01:21:23.405Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 30, 2022 1:21:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T01:21:23.817Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 30, 2022 1:21:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T01:21:23.893Z: Starting 5 workers in us-central1-f...
    Jan 30, 2022 1:21:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T01:21:36.924Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
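
As the message says, the 100-descriptor limit only suppresses creation of new custom.googleapis.com/* descriptors; the counters still land under dataflow.googleapis.com/job/user_counter. Stale descriptors can be listed (and, once reviewed, deleted) with the Monitoring v3 client, roughly like this (a sketch; the project id is taken from the log, and the prefix check is an assumption about which descriptors are Dataflow-created):

    import com.google.api.MetricDescriptor;
    import com.google.cloud.monitoring.v3.MetricServiceClient;
    import com.google.monitoring.v3.ProjectName;

    try (MetricServiceClient client = MetricServiceClient.create()) {
      for (MetricDescriptor d :
          client.listMetricDescriptors(ProjectName.of("apache-beam-testing")).iterateAll()) {
        if (d.getType().startsWith("custom.googleapis.com/")) {
          System.out.println(d.getName());  // review the list first
          // client.deleteMetricDescriptor(d.getName());  // only for known-unused descriptors
        }
      }
    }
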
    Jan 30, 2022 1:22:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T01:22:10.541Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 30, 2022 1:22:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T01:22:37.679Z: Workers have started successfully.
    Jan 30, 2022 1:22:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T01:22:37.716Z: Workers have started successfully.
    Jan 30, 2022 1:23:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-30T01:23:05.661Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDG45QXZfSlZIaElEUhoCamQaAmly/streams/CAUaAmpkGgJpciCJuaDnBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG45QXZfSlZIaElEUhoCamQaAmly/streams/CAUaAmpkGgJpciCJuaDnBygC': offset 86908 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG45QXZfSlZIaElEUhoCamQaAmly/streams/CAUaAmpkGgJpciCJuaDnBygC': offset 86908 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 30, 2022 1:23:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-30T01:23:05.674Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDG45QXZfSlZIaElEUhoCamQaAmly/streams/CAgaAmpkGgJpciDRw77KBigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG45QXZfSlZIaElEUhoCamQaAmly/streams/CAgaAmpkGgJpciDRw77KBigC': offset 93528 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDG45QXZfSlZIaElEUhoCamQaAmly/streams/CAgaAmpkGgJpciDRw77KBigC': offset 93528 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 30, 2022 1:23:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T01:23:09.746Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 30, 2022 1:23:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T01:23:09.905Z: Cleaning up.
    Jan 30, 2022 1:23:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T01:23:09.976Z: Stopping worker pool...
    Jan 30, 2022 1:25:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T01:25:32.988Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 30, 2022 1:25:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-30T01:25:33.036Z: Worker pool stopped.
    Jan 30, 2022 1:25:39 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-29_17_21_07-16646424767662681416 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): dd4335a9-b539-4a9d-9aec-55d2b9dbda6b and timestamp: 2022-01-30T01:25:39.461000000Z:
                     Metric:                    Value:
                   read_time                     7.192
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 30, 2022 1:25:39 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 53.968 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 49s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/fgtvobul5rclu

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2973

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2973/display/redirect>

Changes:


------------------------------------------
[...truncated 361.99 KB...]
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-29_11_15_25-685094418505829750
    Jan 29, 2022 7:15:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-29T19:15:28.466Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 29, 2022 7:15:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T19:15:42.736Z: Worker configuration: e2-standard-2 in us-central1-c.
    Jan 29, 2022 7:15:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T19:15:43.531Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 29, 2022 7:15:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T19:15:43.557Z: Expanding GroupByKey operations into optimizable parts.
    Jan 29, 2022 7:15:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T19:15:43.585Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 29, 2022 7:15:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T19:15:43.654Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 29, 2022 7:15:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T19:15:43.682Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 29, 2022 7:15:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T19:15:43.715Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 29, 2022 7:15:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T19:15:44.161Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 29, 2022 7:15:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T19:15:44.450Z: Starting 5 workers in us-central1-c...
    Jan 29, 2022 7:16:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T19:16:08.625Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 29, 2022 7:16:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T19:16:29.351Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 29, 2022 7:16:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T19:16:57.740Z: Workers have started successfully.
    Jan 29, 2022 7:16:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T19:16:57.770Z: Workers have started successfully.
    Jan 29, 2022 7:17:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-29T19:17:27.535Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDJhczVWbnB4cjgySxoCamQaAmly/streams/CAYaAmpkGgJpciDY1LOnBSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDJhczVWbnB4cjgySxoCamQaAmly/streams/CAYaAmpkGgJpciDY1LOnBSgC': offset 123656 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDJhczVWbnB4cjgySxoCamQaAmly/streams/CAYaAmpkGgJpciDY1LOnBSgC': offset 123656 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 29, 2022 7:17:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-29T19:17:28.219Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDJhczVWbnB4cjgySxoCamQaAmly/streams/CAUaAmpkGgJpciDo1KrlBCgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDJhczVWbnB4cjgySxoCamQaAmly/streams/CAUaAmpkGgJpciDo1KrlBCgC': offset 83791 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDJhczVWbnB4cjgySxoCamQaAmly/streams/CAUaAmpkGgJpciDo1KrlBCgC': offset 83791 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 29, 2022 7:17:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-29T19:17:28.543Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDJhczVWbnB4cjgySxoCamQaAmly/streams/CAQaAmpkGgJpciDBguK4AygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDJhczVWbnB4cjgySxoCamQaAmly/streams/CAQaAmpkGgJpciDBguK4AygC': offset 117734 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDJhczVWbnB4cjgySxoCamQaAmly/streams/CAQaAmpkGgJpciDBguK4AygC': offset 117734 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 29, 2022 7:17:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T19:17:31.387Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 29, 2022 7:17:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T19:17:31.559Z: Cleaning up.
    Jan 29, 2022 7:17:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T19:17:31.629Z: Stopping worker pool...
    Jan 29, 2022 7:20:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T19:20:02.891Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 29, 2022 7:20:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T19:20:02.938Z: Worker pool stopped.
    Jan 29, 2022 7:20:09 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-29_11_15_25-685094418505829750 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): a636d7ca-3317-4aa1-9aba-f52b2fa28704 and timestamp: 2022-01-29T19:20:09.232000000Z:
                     Metric:                    Value:
                   read_time                    10.661
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 29, 2022 7:20:09 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 5 mins 2.709 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 29s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/larbbbuk7v2ck

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2972

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2972/display/redirect>

Changes:


------------------------------------------
[...truncated 359.81 KB...]
Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 29, 2022 1:40:47 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 29, 2022 1:40:48 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 29, 2022 1:40:49 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 29, 2022 1:40:51 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 29, 2022 1:40:51 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 29, 2022 1:40:52 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 29, 2022 1:40:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 29, 2022 1:40:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 29, 2022 1:40:53 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 29, 2022 1:40:53 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@741767653]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
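
The exception above spells out its own remedy: coder inference cannot work for a PCollection of Beam Rows, so the producing step has to attach a schema with setRowSchema (or an explicit coder with setCoder). A minimal, self-contained sketch of that pattern follows; the Create source, class name, and field names are illustrative stand-ins, not the failing test's actual code:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        // Schema for the four projected columns; names are illustrative.
        Schema schema =
            Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();
        Row row = Row.withSchema(schema).addValues("alice", "story", "hello", 3L).build();
        // Without this setRowSchema (or a setCoder call), finishSpecifying
        // fails with the same IllegalStateException shown in the log above.
        PCollection<Row> rows = p.apply(Create.of(row)).setRowSchema(schema);
        p.run().waitUntilFinish();
      }
    }

Create.of merely stands in for the BeamIOSourceRel output that failed here; the point is that a Row-typed PCollection needs its schema attached before the next transform is applied to it.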

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 29, 2022 1:40:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 29, 2022 1:40:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 29, 2022 1:40:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 29, 2022 1:40:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 29, 2022 1:40:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 29, 2022 1:40:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 29, 2022 1:40:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1460265227]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 29, 2022 1:40:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 29, 2022 1:40:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 29, 2022 1:40:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 29, 2022 1:40:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 29, 2022 1:40:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 29, 2022 1:40:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 29, 2022 1:40:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 29, 2022 1:40:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
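
By contrast with the two failing plans above, this push-down plan collapses the BeamCalcRel into the source: with DIRECT_READ, the projection (usedFields) and the supported filter are handed to the BigQuery Storage read itself. A sketch of declaring such a table so the planner can take this path, with a placeholder project, dataset, and simplified column names (the real table's column is `by`, aliased to `author` in the query):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        // "method": "DIRECT_READ" selects the Storage API read, which is the
        // mode that supports projection and filter push-down.
        String ddl =
            "CREATE EXTERNAL TABLE HACKER_NEWS "
                + "(author VARCHAR, type VARCHAR, title VARCHAR, score BIGINT) "
                + "TYPE bigquery "
                + "LOCATION 'my-project:my_dataset.hacker_news' "
                + "TBLPROPERTIES '{\"method\": \"DIRECT_READ\"}'";
        PCollection<Row> rows =
            p.apply(
                SqlTransform.query(
                        "SELECT author, type, title, score FROM HACKER_NEWS "
                            + "WHERE (type = 'story' OR type = 'job') AND score > 2")
                    .withDdlString(ddl));
        p.run().waitUntilFinish();
      }
    }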
    Jan 29, 2022 1:40:55 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 29, 2022 1:40:59 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 29, 2022 1:40:59 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-cucPaeHkP12S23urh2ulDTin3hnEhAHMKvLejSrb1QQ.jar
    Jan 29, 2022 1:40:59 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2685668765094644906.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-VSPbQPIuq8S7FZzuS0buda2BCJGLyTsh-jSZNsqzxXE.jar
    Jan 29, 2022 1:40:59 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/google-cloud-bigtable-emulator/0.137.1/b1639aa134de1302e43d9e9c3843f6ff853c510f/google-cloud-bigtable-emulator-0.137.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-cloud-bigtable-emulator-0.137.1-V2SlLU4BjIGf5QoF7YL-A-QNjw49NdXrNM4QoX9MXSI.jar
    Jan 29, 2022 1:40:59 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mongodb/mongo-java-driver/3.12.7/1f45c6a397feeb46da75425619333d1cc6f90f78/mongo-java-driver-3.12.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/mongo-java-driver-3.12.7-D_zgBJWNb9mzmuetJ37a0X9XtpcfSGsXYpxe6eE8Tao.jar
    Jan 29, 2022 1:41:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 359 files cached, 3 files newly uploaded in 2 seconds
    Jan 29, 2022 1:41:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 29, 2022 1:41:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142351 bytes, hash 321c73cae5ba1c6513574428a909170b931d9866c0485c7cb5764598c400c015> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-MhxzyuW6HGUTV0QoqQkXC5MdmGbASFx8tXZFmMQAwBU.pb
    Jan 29, 2022 1:41:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 29, 2022 1:41:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 29, 2022 1:41:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 29, 2022 1:41:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 29, 2022 1:41:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-29_05_41_05-11089245651626094404?project=apache-beam-testing
    Jan 29, 2022 1:41:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-29_05_41_05-11089245651626094404
    Jan 29, 2022 1:41:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-29_05_41_05-11089245651626094404
    Jan 29, 2022 1:41:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-29T13:41:09.097Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 29, 2022 1:41:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T13:41:16.970Z: Worker configuration: e2-standard-2 in us-central1-a.
    Jan 29, 2022 1:41:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T13:41:17.801Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 29, 2022 1:41:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T13:41:17.839Z: Expanding GroupByKey operations into optimizable parts.
    Jan 29, 2022 1:41:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T13:41:17.878Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 29, 2022 1:41:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T13:41:17.952Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 29, 2022 1:41:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T13:41:17.980Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 29, 2022 1:41:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T13:41:18.014Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 29, 2022 1:41:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T13:41:18.358Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 29, 2022 1:41:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T13:41:18.430Z: Starting 5 workers in us-central1-a...
    Jan 29, 2022 1:41:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T13:41:39.521Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 29, 2022 1:42:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T13:42:15.190Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 29, 2022 1:42:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T13:42:42.549Z: Workers have started successfully.
    Jan 29, 2022 1:42:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T13:42:42.584Z: Workers have started successfully.
    Jan 29, 2022 1:43:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T13:43:13.273Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 29, 2022 1:43:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T13:43:13.434Z: Cleaning up.
    Jan 29, 2022 1:43:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T13:43:13.517Z: Stopping worker pool...
    Jan 29, 2022 1:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T13:45:49.822Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 29, 2022 1:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T13:45:49.863Z: Worker pool stopped.
    Jan 29, 2022 1:45:57 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-29_05_41_05-11089245651626094404 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): a0833f6d-8156-423e-9bba-eb3a1582afca and timestamp: 2022-01-29T13:45:57.330000000Z:
                     Metric:                    Value:
                   read_time                     7.136
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 29, 2022 1:45:57 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 5 mins 14.397 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 15s
165 actionable tasks: 107 executed, 56 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/mdqjtegokdj3g

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2971

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2971/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13293] XLang Jdbc IO for Go SDK (#16111)

[noreply] [BEAM-10206] Add Go Vet to Github Actions (#16612)


------------------------------------------
[...truncated 357.36 KB...]
Successfully started process 'Gradle Test Executor 3'

Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 29, 2022 7:25:53 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 29, 2022 7:25:54 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 29, 2022 7:25:55 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 29, 2022 7:25:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 29, 2022 7:25:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 29, 2022 7:25:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 29, 2022 7:25:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 29, 2022 7:25:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 29, 2022 7:25:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 29, 2022 7:25:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@741767653]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 29, 2022 7:25:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 29, 2022 7:25:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 29, 2022 7:26:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 29, 2022 7:26:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 29, 2022 7:26:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 29, 2022 7:26:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 29, 2022 7:26:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1460265227]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 29, 2022 7:26:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 29, 2022 7:26:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 29, 2022 7:26:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 29, 2022 7:26:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 29, 2022 7:26:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 29, 2022 7:26:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 29, 2022 7:26:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 29, 2022 7:26:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 29, 2022 7:26:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 29, 2022 7:26:04 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 29, 2022 7:26:04 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-cucPaeHkP12S23urh2ulDTin3hnEhAHMKvLejSrb1QQ.jar
    Jan 29, 2022 7:26:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2902130502072884892.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-KS8zHApyImUe90lh6F2E4AMpJhFlXP-OJD_Dd2djKCs.jar
    Jan 29, 2022 7:26:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gradle-worker-3lKbW7C4RFg9eW9EPaq1o0QUoxVIyr04ahoCt3DjPRM.jar
    Jan 29, 2022 7:26:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 360 files cached, 2 files newly uploaded in 0 seconds
    Jan 29, 2022 7:26:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 29, 2022 7:26:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142350 bytes, hash 782a7d3d2a71f083e4e9b88a321c394ff8148b7474c3e700c98c87c920cb03b7> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-eCp9PSpx8IPk6biKMhw5T_gUi3R0w-cAyYyHySDLA7c.pb
    Jan 29, 2022 7:26:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 29, 2022 7:26:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 29, 2022 7:26:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 29, 2022 7:26:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 29, 2022 7:26:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-28_23_26_09-14525866684109903359?project=apache-beam-testing
    Jan 29, 2022 7:26:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-28_23_26_09-14525866684109903359
    Jan 29, 2022 7:26:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-28_23_26_09-14525866684109903359
    Jan 29, 2022 7:26:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-29T07:26:15.054Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 29, 2022 7:26:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T07:26:22.189Z: Worker configuration: e2-standard-2 in us-central1-b.
    Jan 29, 2022 7:26:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T07:26:22.935Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 29, 2022 7:26:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T07:26:22.978Z: Expanding GroupByKey operations into optimizable parts.
    Jan 29, 2022 7:26:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T07:26:23.004Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 29, 2022 7:26:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T07:26:23.080Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 29, 2022 7:26:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T07:26:23.113Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 29, 2022 7:26:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T07:26:23.140Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 29, 2022 7:26:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T07:26:23.476Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 29, 2022 7:26:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T07:26:23.573Z: Starting 5 workers in us-central1-b...
    Jan 29, 2022 7:26:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T07:26:40.969Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 29, 2022 7:27:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T07:27:09.046Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 29, 2022 7:27:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T07:27:34.826Z: Workers have started successfully.
    Jan 29, 2022 7:27:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T07:27:34.847Z: Workers have started successfully.
    Jan 29, 2022 7:28:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T07:28:04.189Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 29, 2022 7:28:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T07:28:04.355Z: Cleaning up.
    Jan 29, 2022 7:28:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T07:28:04.434Z: Stopping worker pool...
    Jan 29, 2022 7:30:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T07:30:41.622Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 29, 2022 7:30:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T07:30:41.675Z: Worker pool stopped.
    Jan 29, 2022 7:30:48 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-28_23_26_09-14525866684109903359 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6191619a-d816-424e-bfec-7f3b3f593448 and timestamp: 2022-01-29T07:30:48.924000000Z:
                     Metric:                    Value:
                   read_time                      7.96
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 29, 2022 7:30:49 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 59.583 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 55s
165 actionable tasks: 107 executed, 56 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/fwh7cdrvz5fgg

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2970

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2970/display/redirect?page=changes>

Changes:

[Kyle Weaver] [BEAM-13751] Don't block on gcloud when attempting to get default GCP

[Kyle Weaver] [BEAM-13751] Parameterize wait timeout so test doesn't waste 2s.

[Kyle Weaver] [BEAM-13751] Add comment explaining sleep.


------------------------------------------
[...truncated 349.90 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is bafc7366a1ea9ffe36afbc2d83b73629
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 29, 2022 12:46:48 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 29, 2022 12:46:49 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 29, 2022 12:46:49 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 29, 2022 12:46:52 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 29, 2022 12:46:52 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 29, 2022 12:46:53 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 29, 2022 12:46:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 29, 2022 12:46:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 29, 2022 12:46:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 29, 2022 12:46:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@457596841]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 29, 2022 12:46:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 29, 2022 12:46:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 29, 2022 12:46:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 29, 2022 12:46:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 29, 2022 12:46:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 29, 2022 12:46:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 29, 2022 12:46:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@715099463]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 29, 2022 12:46:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 29, 2022 12:46:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 29, 2022 12:46:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 29, 2022 12:46:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 29, 2022 12:46:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 29, 2022 12:46:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 29, 2022 12:46:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 29, 2022 12:46:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 29, 2022 12:46:56 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 29, 2022 12:47:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 29, 2022 12:47:00 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-cucPaeHkP12S23urh2ulDTin3hnEhAHMKvLejSrb1QQ.jar
    Jan 29, 2022 12:47:00 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7373039645645146991.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-OzGQtatGKxTLvkMC4Ml5dBif7rT8ucDn3cNAct-D36s.jar
    Jan 29, 2022 12:47:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Jan 29, 2022 12:47:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 29, 2022 12:47:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142351 bytes, hash 8dfae214ce68a24d6f2a87bde6bd7307f301cce088d0d7ecc8818c5766489822> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-jfriFM5ook1vKoe95r1zB_MBzOCI0NfsyIGMV2ZImCI.pb
    Jan 29, 2022 12:47:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 29, 2022 12:47:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 29, 2022 12:47:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 29, 2022 12:47:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 29, 2022 12:47:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-28_16_47_05-13511352317478125918?project=apache-beam-testing
    Jan 29, 2022 12:47:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-28_16_47_05-13511352317478125918
    Jan 29, 2022 12:47:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-28_16_47_05-13511352317478125918
    Jan 29, 2022 12:47:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-29T00:47:09.811Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
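
This warning is expected for the perf test: the job pins a fixed worker pool (numWorkers=5 with autoscalingAlgorithm=NONE) while also passing maxNumWorkers=5, and with autoscaling disabled the ceiling has no effect. A minimal sketch of the relevant options, assuming they are set in code rather than on the command line:

    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.runners.dataflow.options.DataflowPipelineWorkerPoolOptions.AutoscalingAlgorithmType;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    DataflowPipelineOptions options =
        PipelineOptionsFactory.create().as(DataflowPipelineOptions.class);
    options.setNumWorkers(5);                                        // fixed pool size
    options.setAutoscalingAlgorithm(AutoscalingAlgorithmType.NONE);  // autoscaling off
    options.setMaxNumWorkers(5);  // ignored when autoscaling is NONE -> the WARNING above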
    Jan 29, 2022 12:47:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T00:47:22.195Z: Worker configuration: e2-standard-2 in us-central1-a.
    Jan 29, 2022 12:47:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T00:47:22.903Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 29, 2022 12:47:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T00:47:22.953Z: Expanding GroupByKey operations into optimizable parts.
    Jan 29, 2022 12:47:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T00:47:22.999Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 29, 2022 12:47:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T00:47:23.091Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 29, 2022 12:47:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T00:47:23.138Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 29, 2022 12:47:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T00:47:23.171Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 29, 2022 12:47:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T00:47:23.556Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 29, 2022 12:47:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T00:47:23.638Z: Starting 5 workers in us-central1-a...
    Jan 29, 2022 12:47:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T00:47:48.181Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
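
When this descriptor-limit message appears, the REST methods it links can be used to prune stale custom descriptors. A minimal sketch using the Cloud Monitoring v3 Java client (the client choice is an assumption; the log only links the raw API methods), listing custom.googleapis.com descriptors and deleting them:

    import com.google.api.MetricDescriptor;
    import com.google.cloud.monitoring.v3.MetricServiceClient;
    import com.google.monitoring.v3.ListMetricDescriptorsRequest;
    import com.google.monitoring.v3.ProjectName;

    // Sketch: delete unused custom metric descriptors, per the links above.
    try (MetricServiceClient client = MetricServiceClient.create()) {
      ListMetricDescriptorsRequest request =
          ListMetricDescriptorsRequest.newBuilder()
              .setName(ProjectName.of("apache-beam-testing").toString())
              .setFilter("metric.type = starts_with(\"custom.googleapis.com/\")")
              .build();
      for (MetricDescriptor d : client.listMetricDescriptors(request).iterateAll()) {
        client.deleteMetricDescriptor(d.getName());  // irreversible; review the filter first
      }
    }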
    Jan 29, 2022 12:48:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T00:48:17.986Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 29, 2022 12:48:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T00:48:41.992Z: Workers have started successfully.
    Jan 29, 2022 12:48:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T00:48:42.019Z: Workers have started successfully.
    Jan 29, 2022 12:49:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T00:49:10.652Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 29, 2022 12:49:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T00:49:10.792Z: Cleaning up.
    Jan 29, 2022 12:49:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T00:49:10.867Z: Stopping worker pool...
    Jan 29, 2022 12:51:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T00:51:43.851Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 29, 2022 12:51:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-29T00:51:43.895Z: Worker pool stopped.
    Jan 29, 2022 12:51:51 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-28_16_47_05-13511352317478125918 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 745b683f-c6bf-4e88-9ce7-81b53a537825 and timestamp: 2022-01-29T00:51:51.773000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.046

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 29, 2022 12:51:51 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 5 mins 7.365 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 58s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/p26onbaogqbku

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2969

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2969/display/redirect?page=changes>

Changes:

[noreply] Update Python SDK beam-master tags (#16630)

[noreply] Merge pull request #16592 from [BEAM-13722][Playground] Add precompiling

[noreply] Merge pull request #16505 from [BEAM-13527] [Playground] Pipeline


------------------------------------------
[...truncated 365.66 KB...]
    Jan 28, 2022 6:45:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 28, 2022 6:45:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 28, 2022 6:45:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
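
For reference, the same query shape can be run against any schema'd PCollection with SqlTransform; a minimal sketch, assuming an input PCollection<Row> (here named input) carrying the Hacker News schema:

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Sketch: the same projection and predicate over an in-pipeline PCollection.
    PCollection<Row> filtered =
        input.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score "
                    + "FROM PCOLLECTION "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

In this in-pipeline form the filter runs after the read. The IT instead registers HACKER_NEWS through the BigQuery table provider, which is what lets the planner push the projection and filter into the source, as the BEAMPlan below shows.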
    Jan 28, 2022 6:45:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 28, 2022 6:45:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 28, 2022 6:45:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 28, 2022 6:45:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 28, 2022 6:45:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 28, 2022 6:45:25 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 28, 2022 6:45:31 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 28, 2022 6:45:31 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-cucPaeHkP12S23urh2ulDTin3hnEhAHMKvLejSrb1QQ.jar
    Jan 28, 2022 6:45:31 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8833349601382604326.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-rxK_M41LDjoOvWMh52pE7p02A2Uxh6V14p1lvjGtD3M.jar
    Jan 28, 2022 6:45:32 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Jan 28, 2022 6:45:32 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 28, 2022 6:45:32 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142351 bytes, hash d0e773f16ff7e96c2ce6ff6b48531d8e00e1b54c03c6ae495c680605f43500f2> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-0Odz8W_36Wws5v9rSFMdjgDhtUwDxq5JXGgGBfQ1API.pb
    Jan 28, 2022 6:45:35 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 28, 2022 6:45:35 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 28, 2022 6:45:35 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 28, 2022 6:45:36 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 28, 2022 6:45:36 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-28_10_45_36-12881403229638943231?project=apache-beam-testing
    Jan 28, 2022 6:45:36 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-28_10_45_36-12881403229638943231
    Jan 28, 2022 6:45:36 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-28_10_45_36-12881403229638943231
    Jan 28, 2022 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-28T18:45:39.400Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 28, 2022 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T18:45:47.769Z: Worker configuration: e2-standard-2 in us-central1-f.
    Jan 28, 2022 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T18:45:48.452Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 28, 2022 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T18:45:48.482Z: Expanding GroupByKey operations into optimizable parts.
    Jan 28, 2022 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T18:45:48.513Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 28, 2022 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T18:45:48.566Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 28, 2022 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T18:45:48.590Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 28, 2022 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T18:45:48.623Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 28, 2022 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T18:45:48.959Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 28, 2022 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T18:45:49.051Z: Starting 5 workers in us-central1-f...
    Jan 28, 2022 6:46:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T18:46:09.101Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 28, 2022 6:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T18:46:33.325Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 28, 2022 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T18:46:58.075Z: Workers have started successfully.
    Jan 28, 2022 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T18:46:58.122Z: Workers have started successfully.
    Jan 28, 2022 6:47:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-28T18:47:27.626Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGhLc2h1SHl6aTRZQRoCamQaAmly/streams/CAcaAmpkGgJpciCSvLemAygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGhLc2h1SHl6aTRZQRoCamQaAmly/streams/CAcaAmpkGgJpciCSvLemAygC': offset 73992 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGhLc2h1SHl6aTRZQRoCamQaAmly/streams/CAcaAmpkGgJpciCSvLemAygC': offset 73992 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
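
The failure above is transient: the reader requested offset 73992 on the stream before the server had allocated it, so gax surfaced a FAILED_PRECONDITION; the job still reaches DONE below because Dataflow retries the failed work item. A client driving the Storage Read API directly would handle it the same way, backing off and re-issuing ReadRows from its last good offset. A minimal sketch, assuming the v1 client (stream name, retry budget, and backoff are illustrative):

    import com.google.api.gax.rpc.FailedPreconditionException;
    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    /** Reads one stream to completion, resuming after transient offset errors. */
    static void readWithResume(BigQueryReadClient client, String streamName)
        throws InterruptedException {
      long offset = 0;
      for (int attempt = 0; attempt < 5; attempt++) {
        try {
          ReadRowsRequest request =
              ReadRowsRequest.newBuilder().setReadStream(streamName).setOffset(offset).build();
          for (ReadRowsResponse response : client.readRowsCallable().call(request)) {
            offset += response.getRowCount();  // remember progress for resumption
            // ... decode response.getAvroRows() or getArrowRecordBatch() here ...
          }
          return;  // stream drained
        } catch (FailedPreconditionException e) {
          Thread.sleep(1000L << attempt);  // back off, then retry from the same offset
        }
      }
    }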

    Jan 28, 2022 6:47:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-28T18:47:27.631Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGhLc2h1SHl6aTRZQRoCamQaAmly/streams/CAYaAmpkGgJpciDe_OjcBigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGhLc2h1SHl6aTRZQRoCamQaAmly/streams/CAYaAmpkGgJpciDe_OjcBigC': offset 89989 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGhLc2h1SHl6aTRZQRoCamQaAmly/streams/CAYaAmpkGgJpciDe_OjcBigC': offset 89989 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 28, 2022 6:47:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T18:47:29.860Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 28, 2022 6:47:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T18:47:30.059Z: Cleaning up.
    Jan 28, 2022 6:47:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T18:47:30.137Z: Stopping worker pool...
    Jan 28, 2022 6:50:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T18:49:58.617Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 28, 2022 6:50:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T18:49:58.668Z: Worker pool stopped.
    Jan 28, 2022 6:50:06 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-28_10_45_36-12881403229638943231 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c294db2f-5621-418c-93ac-1185f4a1f83e and timestamp: 2022-01-28T18:50:06.377000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.687

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 28, 2022 6:50:06 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 53.028 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 46s
165 actionable tasks: 107 executed, 56 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/37qxairwyopmo

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2968

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2968/display/redirect>

Changes:


------------------------------------------
[...truncated 387.25 KB...]
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/commons-collections/commons-collections/3.2.2/8ad72fe39fa8c91eaaf12aadb21e0c3661fe26d5/commons-collections-3.2.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-collections-3.2.2-7urpF5FxRKaKdB1MDf9mqlxcX9hVk_8he87T_Iyng7g.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mortbay.jetty/jetty/6.1.26/2f546e289fddd5b1fab1d4199fbb6e9ef43ee4b0/jetty-6.1.26.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jetty-6.1.26-IQkdOpwTSfZA_cQhUEpgTAQO2JCH7MEq--MjUzJu1OU.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mortbay.jetty/jetty-util/6.1.26/e5642fe0399814e1687d55a3862aa5a3417226a9/jetty-util-6.1.26.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jetty-util-6.1.26-m5dM4rmfSCVLdhJjN9xFshIm84Oq7WFvWXgK2vFnwEc.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/commons-net/commons-net/3.1/2298164a7c2484406f2aa5ac85b205d39019896f/commons-net-3.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-net-3.1-NKWNbYClB0gwfmdOwntEEeZTb9EueL7EKOsu5JoSMAc.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.sun.jersey/jersey-server/1.9/3a6ea7cc5e15c824953f9f3ece2201b634d90d18/jersey-server-1.9.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jersey-server-1.9-Pe2RsZgHdWG9UfbARCyc1wt1TYsxthr69Ei9qdAYSPA.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/javax.servlet/servlet-api/2.5/5959582d97d8b61f4d154ca9e495aafd16726e34/servlet-api-2.5.jar to gs://temp-storage-for-perf-tests/loadtests/staging/servlet-api-2.5-xljqNgpw-u6ttm-zyQpwLkFCoKt3aPmumChnjg2a1Nw.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.sun.jersey/jersey-core/1.9/8341846f18187013bb9e27e46b7ee00a6395daf4/jersey-core-1.9.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jersey-core-1.9-LG0OyI_Iw2y0FjfZwA0GmMIstrahN_pSbveC4A0iZbw.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.sun.jersey.contribs/jersey-guice/1.9/5963c28c47df7e5d6ad34cec80c071c368777f7b/jersey-guice-1.9.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jersey-guice-1.9-VE_JLSYlMyqajuqnpydM8a9nA5NqUK-oDZKnggCn3jQ.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.sun.jersey/jersey-client/1.9/d3c4b2b5f89db32c96ceddcb863684821910a7bb/jersey-client-1.9.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jersey-client-1.9-iuA68NBsRqUbZdEj7EDyRdppCZGqNmnO9HZ9uPNvvmg.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mortbay.jetty/jetty-sslengine/6.1.26/60367999cee49a3b09fa86bdcb52310b6c896014/jetty-sslengine-6.1.26.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jetty-sslengine-6.1.26-nF9rsWi6AbldJQtX8GHICU4c6cia5OdzSSussXGS6oc.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/log4j/log4j/1.2.17/5af35056b4d257e4b64b9e8069c0746e8b08629f/log4j-1.2.17.jar to gs://temp-storage-for-perf-tests/loadtests/staging/log4j-1.2.17-HTFpZEVpdyBScJF1Q2kIKmZRvUl4G2AF3rlOVnU0Bvk.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.jcraft/jsch/0.1.55/bbd40e5aa7aa3cfad5db34965456cee738a42a50/jsch-0.1.55.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jsch-0.1.55-1JKxWm0uo_HMOcQiyVPEDBIokHPb6DYNmMD2-ex0_EQ.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.htrace/htrace-core4/4.1.0-incubating/12b3e2adda95e8c41d9d45d33db075137871d2e2/htrace-core4-4.1.0-incubating.jar to gs://temp-storage-for-perf-tests/loadtests/staging/htrace-core4-4.1.0-incubating-XUW3kEhXw-StNrO8xXvi0sXzCMabX2pYvYaqfUiiXvY.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/javax.xml.bind/jaxb-api/2.2.2/aeb3021ca93dde265796d82015beecdcff95bf09/jaxb-api-2.2.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jaxb-api-2.2.2-MCM99iFfuYLYeE3pHTB1lnSM6pjW1QIpPHw-hcFpcTc.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty/3.10.6.Final/18ed04a0e502896552854926e908509db2987a00/netty-3.10.6.Final.jar to gs://temp-storage-for-perf-tests/loadtests/staging/netty-3.10.6.Final-h2ilD749k6iNjmAA6l1o4w9Q3JFbN2TDxYcPcMT7O0k.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.rnorth.visible-assertions/visible-assertions/2.1.2/20d31a578030ec8e941888537267d3123c2ad1c1/visible-assertions-2.1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/visible-assertions-2.1.2-RQSulosjfNzcto_1sHqmOr5JkvkHp3w9YSCqm5BBQBw.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.rnorth.duct-tape/duct-tape/1.0.8/92edc22a9ab2f3e17c9bf700aaee377d50e8b530/duct-tape-1.0.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/duct-tape-1.0.8-Mc7xLd7JedH4bXz3CMQaF9pSPQXGhf1mQunQsq3bckA.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.java.dev.jna/jna-platform/4.0.0/deb6bf66918989b50209b8c9aaf3b2561af7f011/jna-platform-4.0.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jna-platform-4.0.0-B21i7Yfna9yzdQ_-gKpJXuIRK9chJmOVTWVH9IJEkuk.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.java.dev.jna/jna/5.5.0/e0845217c4907822403912ad6828d8e0b256208/jna-5.5.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jna-5.5.0-swj66_5O1AnehBDgpjLRZLISawNfbqz_lo05CMr7TZ4.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.minidev/json-smart/2.3/7396407491352ce4fa30de92efb158adb76b5b/json-smart-2.3.jar to gs://temp-storage-for-perf-tests/loadtests/staging/json-smart-2.3-kD9IyKpMP2QmRAuNMt6J-h3COxFpq94l5OHQaKpncIs.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.minidev/accessors-smart/1.2/c592b500269bfde36096641b01238a8350f8aa31/accessors-smart-1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/accessors-smart-1.2-DHwmXWL8AHEk3DK5EzbpxCcmUdYpvF-hpOTjvHWOsuQ.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.ow2.asm/asm/5.0.4/da08b8cce7bbf903602a25a3a163ae252435795/asm-5.0.4.jar to gs://temp-storage-for-perf-tests/loadtests/staging/asm-5.0.4-iWYY7YrmJwJSGni8e-QrfEkaCOaSChX4mj7N7DHpoiA.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.nimbusds/nimbus-jose-jwt/7.9/b608cd5e306d67bb58fe5bd687387aa0671687a6/nimbus-jose-jwt-7.9.jar to gs://temp-storage-for-perf-tests/loadtests/staging/nimbus-jose-jwt-7.9-tPWEU-GAqYHrdEoZtNVq-xLxDD3TXnUxzFjubL9b8oY.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.jettison/jettison/1.1/1a01a2a1218fcf9faa2cc2a6ced025bdea687262/jettison-1.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jettison-1.1-N3lAKIsGQ8SHgBN_b2hXiTfh6lyitzgwqCDFCnt-2AE.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.sun.xml.bind/jaxb-impl/2.3.3/3758e8c1664979749e647a9ca8c7ea1cd83c9b1e/jaxb-impl-2.3.3.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jaxb-impl-2.3.3-5ReNDHlIJH91oTxom_NvTV1JEKEh9xKqOyCulDdwadg.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.inject/guice/3.0/9d84f15fe35e2c716a02979fb62f50a29f38aefa/guice-3.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/guice-3.0-GlnQQh_9NVzAtwtC3xwumvdEyKLQyS2jefX8ovB_HSI.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.jamesmurty.utils/java-xmlbuilder/0.4/ac5962e48cdee3a0a6e1f8e00fcb594747ac5aaf/java-xmlbuilder-0.4.jar to gs://temp-storage-for-perf-tests/loadtests/staging/java-xmlbuilder-0.4-aB5TxP_Vn6EgaIA7JZ46g9Q_B6R8ES50ihh97heesx8.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.sonatype.sisu.inject/cglib/2.2.1-v20090111/7ce5e983fd0e6c78346f4c9cbfa39d83049dda2/cglib-2.2.1-v20090111.jar to gs://temp-storage-for-perf-tests/loadtests/staging/cglib-2.2.1-v20090111-QuHfsmvsvxpjPyW0fjn8xCK4XnfkwEaNmkT4hfX6C-I.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/asm/asm/3.1/c157def142714c544bdea2e6144645702adf7097/asm-3.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/asm-3.1-Mz_1NpBDl1t-AxuLJyBpN0QYVHOOA4wfR_mNByogQ3o.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/javax.xml.stream/stax-api/1.0-2/d6337b0de8b25e53e81b922352fbea9f9f57ba0b/stax-api-1.0-2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/stax-api-1.0-2-6McOvXb5gslYKoLvgs9s4Up9WKSk3KXLe3_JiMgAibc.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/javax.activation/activation/1.1/e6cb541461c2834bdea3eb920f1884d1eb508b50/activation-1.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/activation-1.1-KIHHnJ1u8BxY5ivuoT6dGsi4uqFvL8GYrW5ndt79zdM.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.squareup.okio/okio/1.6.0/98476622f10715998eacf9240d6b479f12c66143/okio-1.6.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/okio-1.6.0-EUvcH0czimi8vJWr8vXNxyvu7JGBLy_Ne1IcGTeHYmY.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.fusesource.leveldbjni/leveldbjni-all/1.8/707350a2eeb1fa2ed77a32ddb3893ed308e941db/leveldbjni-all-1.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/leveldbjni-all-1.8-wpchOw5vk5IwWVJ1PzCZpMAucLNlYmb-AYZ-e2wWD_4.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.sun.activation/jakarta.activation/1.2.2/74548703f9851017ce2f556066659438019e7eb5/jakarta.activation-1.2.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jakarta.activation-1.2.2-AhVnc-SunQSNFKVq011kS-6fEFKnkdBy3z3tPGVubho.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.geronimo.specs/geronimo-jcache_1.0_spec/1.0-alpha-1/ef92fbbc3a3a7f45bf021bcb75df2c6e0660dfac/geronimo-jcache_1.0_spec-1.0-alpha-1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/geronimo-jcache_1.0_spec-1.0-alpha-1-AHChLlj0kblXGTkTJSmaYpRTDubDziXlC9yYsLcAlmw.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.stephenc.jcip/jcip-annotations/1.0-1/ef31541dd28ae2cefdd17c7ebf352d93e9058c63/jcip-annotations-1.0-1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jcip-annotations-1.0-1-T8z_g4Kq_FiZYsTtsmL2qlleNPHhHmEFfRxqluj8cyM.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/javax.inject/javax.inject/1/6975da39a7040257bd51d21a231b76c915872d38/javax.inject-1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/javax.inject-1-kcdwRKUMSBY2wy2Rb9ickRinIZU5BFLIEGUID5V95_8.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.microsoft.sqlserver/mssql-jdbc/6.2.1.jre7/2912ca3a5ee674ec79cd6914b9f5d6282d083eb8/mssql-jdbc-6.2.1.jre7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/mssql-jdbc-6.2.1.jre7-nPollFCuNHHS5uLD2K78ziNuPa74s3NNIdyTw6W76AY.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/aopalliance/aopalliance/1.0/235ba8b489512805ac13a8f9ea77a1ca5ebe3e8/aopalliance-1.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/aopalliance-1.0-Ct3sZw_tzT8RPFyAkdeDKA0j9146y4QbYanNsHk3agg.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-transport/3.2.7/315903a129f530422747efc163dd255f0fa2555e/docker-java-transport-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-transport-3.2.7-ynNXlSXhDSyiLCXkA4c9E9_HGeKB7-wbpEzmswFrI-A.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/jline/jline/0.9.94/99a18e9a44834afdebc467294e1138364c207402/jline-0.9.94.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jline-0.9.94-2N8P-xLYfKh2JxzaTVmz_rlBI4gsG-F2O3-vLgoLDLs.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/commons-cli/commons-cli/1.2/2bf96b7aa8b611c177d329452af1dc933e14501c/commons-cli-1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-cli-1.2-582JUZVtNJtWi3zP1PWyUpqMET5nwysCj1L_2jcSWdk.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.squareup.okhttp/okhttp/2.7.5/7a15a7db50f86c4b64aa3367424a60e3a325b8f1/okhttp-2.7.5.jar to gs://temp-storage-for-perf-tests/loadtests/staging/okhttp-2.7.5-iKyf0btR-CvMZkzB65wiXJDcQ4nWYCMbTMc3vr_n0Ko.jar
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 260 files cached, 102 files newly uploaded in 1 seconds
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 28, 2022 1:08:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142351 bytes, hash 639aab2e3e55a42bf7686892253598376c5c95840a06f286f067a7121b318b53> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Y5qrLj5VpCv3aGiSJTWYN2xclYQKBvKG8GenEhsxi1M.pb
    Jan 28, 2022 1:08:31 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 28, 2022 1:08:31 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 28, 2022 1:08:31 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 28, 2022 1:08:31 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 28, 2022 1:08:32 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-28_05_08_31-7396947567542186327?project=apache-beam-testing
    Jan 28, 2022 1:08:32 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-28_05_08_31-7396947567542186327
    Jan 28, 2022 1:08:32 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-28_05_08_31-7396947567542186327
    Jan 28, 2022 1:08:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-28T13:08:34.939Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 28, 2022 1:08:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T13:08:43.913Z: Worker configuration: e2-standard-2 in us-central1-a.
    Jan 28, 2022 1:08:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T13:08:44.699Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 28, 2022 1:08:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T13:08:44.728Z: Expanding GroupByKey operations into optimizable parts.
    Jan 28, 2022 1:08:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T13:08:44.755Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 28, 2022 1:08:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T13:08:44.824Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 28, 2022 1:08:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T13:08:44.871Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 28, 2022 1:08:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T13:08:44.913Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 28, 2022 1:08:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T13:08:45.273Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 28, 2022 1:08:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T13:08:45.368Z: Starting 5 workers in us-central1-a...
    Jan 28, 2022 1:09:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T13:09:00.013Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 28, 2022 1:09:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T13:09:35.499Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 28, 2022 1:09:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T13:09:59.294Z: Workers have started successfully.
    Jan 28, 2022 1:09:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T13:09:59.334Z: Workers have started successfully.
    Jan 28, 2022 1:10:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-28T13:10:26.866Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDFJDX0d2SEdWZ3lyThoCamQaAmly/streams/CAYaAmpkGgJpciDW6M3QAygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFJDX0d2SEdWZ3lyThoCamQaAmly/streams/CAYaAmpkGgJpciDW6M3QAygC': offset 64580 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDFJDX0d2SEdWZ3lyThoCamQaAmly/streams/CAYaAmpkGgJpciDW6M3QAygC': offset 64580 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
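
The FAILED_PRECONDITION above comes from the BigQuery Storage Read API: a ReadRows call asked for a row offset that the server-side stream had not produced yet, which is typically transient when a reader resumes mid-stream -- note that the job below still finishes with status DONE once the read is retried. A minimal sketch of resuming a read stream at an explicit offset with the v1 client; the stream name and offset are illustrative placeholders, not values taken from this log:

    import com.google.api.gax.rpc.ServerStream;
    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    public class ResumeReadStreamSketch {
      public static void main(String[] args) throws Exception {
        // Placeholders, not values from this log:
        String streamName = "projects/PROJECT/locations/us/sessions/SESSION/streams/STREAM";
        long lastConsumedOffset = 0L; // offset of the last row successfully read
        try (BigQueryReadClient client = BigQueryReadClient.create()) {
          ReadRowsRequest request =
              ReadRowsRequest.newBuilder()
                  .setReadStream(streamName)
                  .setOffset(lastConsumedOffset + 1) // resume just past the last delivered row
                  .build();
          // Requesting an offset the stream has not allocated yet yields exactly the
          // FAILED_PRECONDITION seen in the trace above.
          ServerStream<ReadRowsResponse> rows = client.readRowsCallable().call(request);
          for (ReadRowsResponse response : rows) {
            // Consume response.getAvroRows() or response.getArrowRecordBatch() here.
          }
        }
      }
    }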

    Jan 28, 2022 1:10:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T13:10:28.634Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 28, 2022 1:10:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T13:10:28.881Z: Cleaning up.
    Jan 28, 2022 1:10:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T13:10:28.978Z: Stopping worker pool...
    Jan 28, 2022 1:13:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T13:13:02.770Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 28, 2022 1:13:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T13:13:02.842Z: Worker pool stopped.
    Jan 28, 2022 1:13:12 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-28_05_08_31-7396947567542186327 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): cc33d244-7087-4cda-bcfe-cc6656c530e6 and timestamp: 2022-01-28T13:13:12.185000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.326

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 28, 2022 1:13:12 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 5 mins 1.876 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 54s
165 actionable tasks: 104 executed, 59 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/rebmohoevthnw

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2967

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2967/display/redirect?page=changes>

Changes:

[dhuntsperger] added GitHub example references to Python multilang quickstart

[noreply] Merge pull request #16579 from Revert "Revert "Merge pull request #15863

[noreply] Merge pull request #16606 from [BEAM-13247] [Playground] Embedding


------------------------------------------
[...truncated 370.21 KB...]
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-27_23_39_09-3178622382860995465
    Jan 28, 2022 7:39:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-28T07:39:15.190Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 28, 2022 7:39:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T07:39:25.634Z: Worker configuration: e2-standard-2 in us-central1-f.
    Jan 28, 2022 7:39:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T07:39:26.464Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 28, 2022 7:39:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T07:39:26.523Z: Expanding GroupByKey operations into optimizable parts.
    Jan 28, 2022 7:39:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T07:39:26.555Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 28, 2022 7:39:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T07:39:26.650Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 28, 2022 7:39:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T07:39:26.690Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 28, 2022 7:39:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T07:39:26.731Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 28, 2022 7:39:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T07:39:27.201Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 28, 2022 7:39:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T07:39:27.279Z: Starting 5 workers in us-central1-f...
    Jan 28, 2022 7:39:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T07:39:31.594Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 28, 2022 7:40:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T07:40:13.181Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 28, 2022 7:40:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T07:40:38.843Z: Workers have started successfully.
    Jan 28, 2022 7:40:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T07:40:38.894Z: Workers have started successfully.
    Jan 28, 2022 7:41:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-28T07:41:09.485Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHhJYTY4UldKWGx6eRoCamQaAmly/streams/CAYaAmpkGgJpciCj8Ou5BigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHhJYTY4UldKWGx6eRoCamQaAmly/streams/CAYaAmpkGgJpciCj8Ou5BigC': offset 76228 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHhJYTY4UldKWGx6eRoCamQaAmly/streams/CAYaAmpkGgJpciCj8Ou5BigC': offset 76228 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 28, 2022 7:41:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-28T07:41:09.821Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHhJYTY4UldKWGx6eRoCamQaAmly/streams/CAQaAmpkGgJpciDH9t3QBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHhJYTY4UldKWGx6eRoCamQaAmly/streams/CAQaAmpkGgJpciDH9t3QBygC': offset 94615 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHhJYTY4UldKWGx6eRoCamQaAmly/streams/CAQaAmpkGgJpciDH9t3QBygC': offset 94615 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 28, 2022 7:41:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-28T07:41:10.480Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDHhJYTY4UldKWGx6eRoCamQaAmly/streams/CAUaAmpkGgJpciD-l4mDBCgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHhJYTY4UldKWGx6eRoCamQaAmly/streams/CAUaAmpkGgJpciD-l4mDBCgC': offset 81708 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDHhJYTY4UldKWGx6eRoCamQaAmly/streams/CAUaAmpkGgJpciD-l4mDBCgC': offset 81708 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 28, 2022 7:41:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T07:41:12.227Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 28, 2022 7:41:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T07:41:12.386Z: Cleaning up.
    Jan 28, 2022 7:41:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T07:41:12.476Z: Stopping worker pool...
    Jan 28, 2022 7:43:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T07:43:39.381Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 28, 2022 7:43:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T07:43:39.430Z: Worker pool stopped.
    Jan 28, 2022 7:43:45 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-27_23_39_09-3178622382860995465 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 5f3ac1eb-e7ab-478b-bcbc-d96a1d89ba9d and timestamp: 2022-01-28T07:43:45.598000000Z:
                     Metric:                    Value:
                 fields_read                 4634020.0
                   read_time                      9.49

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 28, 2022 7:43:45 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 56.798 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 53s
165 actionable tasks: 107 executed, 56 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/lansrub5ny7w4

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2966

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2966/display/redirect?page=changes>

Changes:

[marcin.kuthan] Get rid of unnecessary logs for BigQuery streaming writes in

[Heejong Lee] Fix google3 import error

[noreply] [BEAM-12976] Implement Java projection pushdown optimizer. (#16513)


------------------------------------------
[...truncated 354.29 KB...]
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]
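
The multiple-bindings warning above is harmless here (SLF4J simply picks one of the bindings it finds, the JDK14 one in this run), but the usual cleanup is to leave exactly one binding on the test classpath. An illustrative Gradle exclusion, not the project's actual build configuration:

    // build.gradle (illustrative): drop the extra SLF4J bindings
    configurations.all {
        exclude group: 'org.slf4j', module: 'slf4j-log4j12'
        exclude group: 'org.slf4j', module: 'slf4j-simple'
    }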

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 28, 2022 1:35:07 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 28, 2022 1:35:10 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 28, 2022 1:35:13 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 28, 2022 1:35:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 28, 2022 1:35:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 28, 2022 1:35:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 28, 2022 1:35:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 28, 2022 1:35:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 28, 2022 1:35:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 28, 2022 1:35:33 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1772579700]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 28, 2022 1:35:35 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 28, 2022 1:35:35 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 28, 2022 1:35:35 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 28, 2022 1:35:35 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 28, 2022 1:35:35 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 28, 2022 1:35:35 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 28, 2022 1:35:36 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@267368044]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 28, 2022 1:35:37 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 28, 2022 1:35:37 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 28, 2022 1:35:37 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 28, 2022 1:35:37 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 28, 2022 1:35:37 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 28, 2022 1:35:37 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 28, 2022 1:35:38 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 28, 2022 1:35:38 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
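
The BEAMPlan above is the point of this push-down test: instead of a BeamCalcRel filtering rows after the read (as in the two failing tests), the projection (usedFields) and the predicate travel with the source into the BigQuery Storage Read API. Roughly the equivalent hand-written BigQueryIO read, as a sketch; the table name is illustrative rather than the table the test targets:

    import com.google.api.services.bigquery.model.TableRow;
    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    // Selected fields and the row restriction are evaluated server-side by the
    // Storage Read API, so only the four projected columns of matching rows stream back.
    TypedRead<TableRow> read =
        BigQueryIO.readTableRows()
            .from("bigquery-public-data:hacker_news.full") // illustrative table
            .withMethod(TypedRead.Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");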
    Jan 28, 2022 1:35:39 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 28, 2022 1:35:50 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 28, 2022 1:35:50 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-cucPaeHkP12S23urh2ulDTin3hnEhAHMKvLejSrb1QQ.jar
    Jan 28, 2022 1:35:52 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7730826425333832671.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-lRAdnbPgU1W2nxqTiaZ9Kp9RIeifYhYhHUVY_-FT2us.jar
    Jan 28, 2022 1:35:58 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 8 seconds
    Jan 28, 2022 1:35:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 28, 2022 1:35:59 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142351 bytes, hash 804d5fd60767ec4391f66d609f6bfb85b6d594998a64034961a9ea8981c06fc7> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-gE1f1gdn7EOR9m1gn2v7hbbVlJmKZANJYanqiYHAb8c.pb
    Jan 28, 2022 1:36:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 28, 2022 1:36:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 28, 2022 1:36:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 28, 2022 1:36:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 28, 2022 1:36:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-27_17_36_13-7021713981444485243?project=apache-beam-testing
    Jan 28, 2022 1:36:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-27_17_36_13-7021713981444485243
    Jan 28, 2022 1:36:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-27_17_36_13-7021713981444485243
    Jan 28, 2022 1:36:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-28T01:36:16.557Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 28, 2022 1:36:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T01:36:26.595Z: Worker configuration: e2-standard-2 in us-central1-a.
    Jan 28, 2022 1:36:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T01:36:27.456Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 28, 2022 1:36:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T01:36:27.494Z: Expanding GroupByKey operations into optimizable parts.
    Jan 28, 2022 1:36:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T01:36:27.523Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 28, 2022 1:36:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T01:36:27.618Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 28, 2022 1:36:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T01:36:27.647Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 28, 2022 1:36:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T01:36:27.683Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 28, 2022 1:36:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T01:36:28.087Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 28, 2022 1:36:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T01:36:28.155Z: Starting 5 workers in us-central1-a...
    Jan 28, 2022 1:36:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T01:36:48.500Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 28, 2022 1:37:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T01:37:17.510Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 28, 2022 1:37:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T01:37:42.647Z: Workers have started successfully.
    Jan 28, 2022 1:37:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T01:37:42.699Z: Workers have started successfully.
    Jan 28, 2022 1:38:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T01:38:09.988Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 28, 2022 1:38:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T01:38:10.137Z: Cleaning up.
    Jan 28, 2022 1:38:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T01:38:10.246Z: Stopping worker pool...
    Jan 28, 2022 1:40:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T01:40:35.862Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 28, 2022 1:40:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-28T01:40:35.913Z: Worker pool stopped.
    Jan 28, 2022 1:40:41 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-27_17_36_13-7021713981444485243 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d4aa7f19-ff93-4442-a557-b442ff258959 and timestamp: 2022-01-28T01:40:41.557000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     5.449

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 28, 2022 1:40:41 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.139 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.115 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 5 mins 50.125 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 9m 47s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/q64o45o7mfnyi

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2965

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2965/display/redirect?page=changes>

Changes:

[mmack] [adhoc] Test S3Options and AwsOptions for Sdk v2

[noreply] [BEAM-13537] Fix NPE in kafkatopubsub example (#16625)

[noreply] [BEAM-13740] update java_tests.yml to remove setup-go, which is


------------------------------------------
[...truncated 356.03 KB...]
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 27, 2022 7:55:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 27, 2022 7:55:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 27, 2022 7:55:49 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 27, 2022 7:55:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 27, 2022 7:55:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 27, 2022 7:55:49 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 27, 2022 7:55:49 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@190838539]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)
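
Annotation (not captured output): the IllegalStateException above names its own remedy: a PCollection of Beam Rows cannot get a coder from the CoderRegistry, so the schema must be attached explicitly with PCollection.setRowSchema. A minimal, self-contained Java sketch of that fix follows; the class name, seed element, and field types are illustrative assumptions that mirror the SELECT list in the logged query, not code taken from BigQueryIOPushDownIT.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class SetRowSchemaSketch {
      // Mirrors the projected columns in the logged query (assumed types).
      static final Schema SCHEMA =
          Schema.builder()
              .addStringField("author")
              .addStringField("type")
              .addStringField("title")
              .addInt64Field("score")
              .build();

      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        PCollection<Row> rows =
            p.apply(Create.of("seed"))
                .apply(ParDo.of(
                    new DoFn<String, Row>() {
                      @ProcessElement
                      public void process(ProcessContext c) {
                        c.output(Row.withSchema(SCHEMA)
                            .addValues("jsmith", "story", "Example title", 3L)
                            .build());
                      }
                    }))
                // Without this call, PCollection.getCoder() fails exactly as
                // logged above: "Cannot provide a coder for a Beam Row."
                .setRowSchema(SCHEMA);
        p.run().waitUntilFinish();
      }
    }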

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 27, 2022 7:55:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 27, 2022 7:55:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 27, 2022 7:55:50 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 27, 2022 7:55:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 27, 2022 7:55:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 27, 2022 7:55:50 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 27, 2022 7:55:50 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 27, 2022 7:55:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
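
Annotation (not captured output): the BEAMPlan and filter lines above show the planner collapsing both the projection (usedFields) and the WHERE clause into the BigQuery source, so only matching rows of the four selected columns ever leave BigQuery. Expressed directly against BigQueryIO, the equivalent read is sketched below; the table reference is an illustrative assumption, not the table the test reads.

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        p.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // assumed table
                .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
                // Same projection and row restriction the planner pushed down:
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }
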
    Jan 27, 2022 7:55:50 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 27, 2022 7:55:56 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 27, 2022 7:55:56 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-YCffysRTmlcOjanlGwq5EAMniHJUzNYmrIOSC1MAHpY.jar
    Jan 27, 2022 7:55:56 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3311984143934375849.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-cxXo6K9KPr7g3d52sjw1FVwx2R_VD7j8WQ_gtXu-RLc.jar
    Jan 27, 2022 7:55:57 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Jan 27, 2022 7:55:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 27, 2022 7:55:57 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142351 bytes, hash 43175cadf1ab99e114ed6559e3b9c022c180fb110b472656d17338bc22fe581a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-QxdcrfGrmeEU7WVZ47nAIsGA-xELRyZW0XM4vCL-WBo.pb
    Jan 27, 2022 7:56:00 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 27, 2022 7:56:00 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 27, 2022 7:56:00 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 27, 2022 7:56:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 27, 2022 7:56:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-27_11_56_01-5827737594213810832?project=apache-beam-testing
    Jan 27, 2022 7:56:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-27_11_56_01-5827737594213810832
    Jan 27, 2022 7:56:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-27_11_56_01-5827737594213810832
    Jan 27, 2022 7:56:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-27T19:56:04.301Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 27, 2022 7:56:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T19:56:15.088Z: Worker configuration: e2-standard-2 in us-central1-a.
    Jan 27, 2022 7:56:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T19:56:16.067Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 27, 2022 7:56:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T19:56:16.120Z: Expanding GroupByKey operations into optimizable parts.
    Jan 27, 2022 7:56:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T19:56:16.139Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 27, 2022 7:56:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T19:56:16.208Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 27, 2022 7:56:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T19:56:16.231Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 27, 2022 7:56:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T19:56:16.274Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 27, 2022 7:56:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T19:56:16.575Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 27, 2022 7:56:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T19:56:16.646Z: Starting 5 workers in us-central1-a...
    Jan 27, 2022 7:56:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T19:56:31.529Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
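
Annotation (not captured output): the warning above is benign for this run, since user metrics remain queryable under dataflow.googleapis.com/job/user_counter. If the custom.googleapis.com/* descriptors were actually needed, the suggested cleanup is a metricDescriptors.delete call; a hedged Java sketch via the Cloud Monitoring client follows, with the descriptor name as an illustrative assumption.

    import com.google.cloud.monitoring.v3.MetricServiceClient;

    public class DeleteMetricDescriptorSketch {
      public static void main(String[] args) throws Exception {
        try (MetricServiceClient client = MetricServiceClient.create()) {
          // Frees one of the 100 custom descriptor slots mentioned above.
          client.deleteMetricDescriptor(
              "projects/apache-beam-testing/metricDescriptors/"
                  + "custom.googleapis.com/old_unused_metric");
        }
      }
    }
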
    Jan 27, 2022 7:57:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T19:57:08.738Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 27, 2022 7:57:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T19:57:32.995Z: Workers have started successfully.
    Jan 27, 2022 7:57:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T19:57:33.025Z: Workers have started successfully.
    Jan 27, 2022 7:58:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-27T19:58:00.779Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGtFQ0lQTWlXMjVrbRoCamQaAmly/streams/CAcaAmpkGgJpciCwrqqOAigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGtFQ0lQTWlXMjVrbRoCamQaAmly/streams/CAcaAmpkGgJpciCwrqqOAigC': offset 83421 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGtFQ0lQTWlXMjVrbRoCamQaAmly/streams/CAcaAmpkGgJpciCwrqqOAigC': offset 83421 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
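
Annotation (not captured output): the root cause of the read failures in this build is the FAILED_PRECONDITION above. When a Dataflow worker resumes a BigQuery Storage Read API stream, it asks the service to continue from a row offset, and the service rejects offsets it has not yet allocated for that stream. A hedged sketch of the underlying call, reduced to the raw Storage Read API client, follows; the stream name and offset are placeholders in the spirit of the log, not values that would work outside the original read session.

    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    public class ReadRowsAtOffsetSketch {
      public static void main(String[] args) throws Exception {
        try (BigQueryReadClient client = BigQueryReadClient.create()) {
          ReadRowsRequest request =
              ReadRowsRequest.newBuilder()
                  // Stream names are scoped to a read session (placeholder).
                  .setReadStream(
                      "projects/my-project/locations/us/sessions/SESSION/streams/STREAM")
                  .setOffset(83421) // resume point; must already be allocated
                  .build();
          // Iterating the server stream surfaces FailedPreconditionException
          // when the requested offset is past what the stream has allocated.
          for (ReadRowsResponse response : client.readRowsCallable().call(request)) {
            System.out.println(response.getRowCount());
          }
        }
      }
    }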

    Jan 27, 2022 7:58:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T19:58:03.313Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 27, 2022 7:58:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T19:58:03.455Z: Cleaning up.
    Jan 27, 2022 7:58:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T19:58:03.522Z: Stopping worker pool...
    Jan 27, 2022 8:00:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T20:00:35.284Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 27, 2022 8:00:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T20:00:35.332Z: Worker pool stopped.
    Jan 27, 2022 8:00:42 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-27_11_56_01-5827737594213810832 finished with status DONE.


Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 813142e8-1ddf-45c9-aeaf-a5e755c6c7fd and timestamp: 2022-01-27T20:00:42.294000000Z:
                     Metric:                    Value:
                   read_time                     7.662
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 27, 2022 8:00:42 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

3 tests completed, 2 failed
Finished generating test XML results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 5 mins 5.422 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 46s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/5apuakdavlsvy

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2964

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2964/display/redirect>

Changes:


------------------------------------------
[...truncated 373.87 KB...]
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEpSRWFBdldSSTJGRRoCamQaAmly/streams/CAcaAmpkGgJpciCvhau4ASgC': offset 84463 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 27, 2022 1:18:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-27T13:18:56.473Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEpSRWFBdldSSTJGRRoCamQaAmly/streams/CAQaAmpkGgJpciCt4_xsKAI"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEpSRWFBdldSSTJGRRoCamQaAmly/streams/CAQaAmpkGgJpciCt4_xsKAI': offset 66460 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEpSRWFBdldSSTJGRRoCamQaAmly/streams/CAQaAmpkGgJpciCt4_xsKAI': offset 66460 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 27, 2022 1:18:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-27T13:18:56.918Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEpSRWFBdldSSTJGRRoCamQaAmly/streams/CAkaAmpkGgJpciDorOjNBigC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEpSRWFBdldSSTJGRRoCamQaAmly/streams/CAkaAmpkGgJpciDorOjNBigC': offset 91185 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEpSRWFBdldSSTJGRRoCamQaAmly/streams/CAkaAmpkGgJpciDorOjNBigC': offset 91185 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 27, 2022 1:18:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-27T13:18:57.065Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDEpSRWFBdldSSTJGRRoCamQaAmly/streams/CAIaAmpkGgJpciDG4rKPAygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEpSRWFBdldSSTJGRRoCamQaAmly/streams/CAIaAmpkGgJpciDG4rKPAygC': offset 95977 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDEpSRWFBdldSSTJGRRoCamQaAmly/streams/CAIaAmpkGgJpciDG4rKPAygC': offset 95977 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 27, 2022 1:19:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T13:19:01.071Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 27, 2022 1:19:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T13:19:01.205Z: Cleaning up.
    Jan 27, 2022 1:19:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T13:19:01.308Z: Stopping worker pool...
    Jan 27, 2022 1:21:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T13:21:21.435Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 27, 2022 1:21:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T13:21:21.480Z: Worker pool stopped.
    Jan 27, 2022 1:21:26 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-27_05_16_54-11918613000564294241 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 0a66db2b-0215-4855-8522-a43c3249f9a4 and timestamp: 2022-01-27T13:21:26.989000000Z:
                     Metric:                    Value:
                   read_time                    12.037
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 27, 2022 1:21:27 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 51.99 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 23s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/u4323id4kerom

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2963

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2963/display/redirect?page=changes>

Changes:

[Heejong Lee] Update readme for XVR tests


------------------------------------------
[...truncated 364.49 KB...]
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-26_22_52_08-1295845141677491829
    Jan 27, 2022 6:52:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-27T06:52:11.561Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 27, 2022 6:52:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T06:52:19.219Z: Worker configuration: e2-standard-2 in us-central1-b.
    Jan 27, 2022 6:52:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T06:52:19.974Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 27, 2022 6:52:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T06:52:20.013Z: Expanding GroupByKey operations into optimizable parts.
    Jan 27, 2022 6:52:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T06:52:20.039Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 27, 2022 6:52:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T06:52:20.110Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 27, 2022 6:52:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T06:52:20.137Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 27, 2022 6:52:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T06:52:20.169Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 27, 2022 6:52:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T06:52:20.545Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 27, 2022 6:52:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T06:52:20.623Z: Starting 5 workers in us-central1-b...
    Jan 27, 2022 6:52:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T06:52:33.758Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 27, 2022 6:53:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T06:53:04.460Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 27, 2022 6:53:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T06:53:30.388Z: Workers have started successfully.
    Jan 27, 2022 6:53:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T06:53:30.435Z: Workers have started successfully.
    Jan 27, 2022 6:53:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-27T06:53:58.293Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGF3T1NDcURFNDBVXxoCamQaAmly/streams/CAEaAmpkGgJpciDF_sadBCgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGF3T1NDcURFNDBVXxoCamQaAmly/streams/CAEaAmpkGgJpciDF_sadBCgC': offset 121304 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGF3T1NDcURFNDBVXxoCamQaAmly/streams/CAEaAmpkGgJpciDF_sadBCgC': offset 121304 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 27, 2022 6:54:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-27T06:53:59.368Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGF3T1NDcURFNDBVXxoCamQaAmly/streams/CAcaAmpkGgJpciCRgsOPBygC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGF3T1NDcURFNDBVXxoCamQaAmly/streams/CAcaAmpkGgJpciCRgsOPBygC': offset 90903 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGF3T1NDcURFNDBVXxoCamQaAmly/streams/CAcaAmpkGgJpciCRgsOPBygC': offset 90903 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 27, 2022 6:54:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-27T06:53:59.373Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDGF3T1NDcURFNDBVXxoCamQaAmly/streams/CAQaAmpkGgJpciCf7ZiQBSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGF3T1NDcURFNDBVXxoCamQaAmly/streams/CAQaAmpkGgJpciCf7ZiQBSgC': offset 85921 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDGF3T1NDcURFNDBVXxoCamQaAmly/streams/CAQaAmpkGgJpciCf7ZiQBSgC': offset 85921 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more
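
All of the FAILED_PRECONDITION traces above follow the same pattern: a ReadRows call on a BigQuery Storage Read API stream asked for an offset past what the server had made available on that stream (e.g. on a resume or retry), gax wrapped the gRPC status as FailedPreconditionException, and the work item failed; Dataflow retries it, which is why the operation still finishes a few lines below. A minimal, hypothetical sketch of the underlying Storage Read API call that produces this error, not the Beam-internal reader; the stream name and resume offset are placeholders, not values from this job:

    import com.google.api.gax.rpc.ServerStream;
    import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
    import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
    import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;

    public class ResumeReadSketch {
      public static void main(String[] args) throws Exception {
        // Placeholder stream name; real names look like the
        // projects/.../sessions/.../streams/... paths in the traces above.
        String streamName =
            "projects/PROJECT/locations/us/sessions/SESSION/streams/STREAM";
        // Resuming at an offset beyond the server-allocated range is what
        // yields "offset ... has not been allocated yet".
        long resumeOffset = 0;
        try (BigQueryReadClient client = BigQueryReadClient.create()) {
          ReadRowsRequest request =
              ReadRowsRequest.newBuilder()
                  .setReadStream(streamName)
                  .setOffset(resumeOffset)
                  .build();
          ServerStream<ReadRowsResponse> stream =
              client.readRowsCallable().call(request);
          long rows = 0;
          for (ReadRowsResponse response : stream) {
            rows += response.getRowCount();
          }
          System.out.println("Read " + rows + " rows from " + streamName);
        }
      }
    }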

    Jan 27, 2022 6:54:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T06:54:01.506Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 27, 2022 6:54:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T06:54:01.645Z: Cleaning up.
    Jan 27, 2022 6:54:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T06:54:01.710Z: Stopping worker pool...
    Jan 27, 2022 6:56:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T06:56:31.850Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 27, 2022 6:56:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T06:56:31.881Z: Worker pool stopped.
    Jan 27, 2022 6:56:38 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-26_22_52_08-1295845141677491829 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): fee1d27c-80af-4af9-897d-ad5f1ba82653 and timestamp: 2022-01-27T06:56:38.874000000Z:
                     Metric:                    Value:
                   read_time                    10.201
                 fields_read                 4632272.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 27, 2022 6:56:38 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 50.033 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 37s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/v5whqcscuhvsc

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2962

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2962/display/redirect?page=changes>

Changes:

[Heejong Lee] [BEAM-13093] Enable JavaUsingPython CrossLanguageValidateRunner test for

[noreply] Merge pull request #16534 from [BEAM-13671][Playground] Add backend

[noreply] [BEAM-13271] Bump errorprone to 2.10.0 (#16231)

[noreply] [BEAM-13595] Don't load main session when cloudpickle is used. (#16589)


------------------------------------------
[...truncated 364.52 KB...]
    Jan 27, 2022 1:21:49 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 27, 2022 1:21:49 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 27, 2022 1:21:49 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
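
For readers of this archive: a query of this shape is typically handed to the Calcite planner through SqlTransform. A minimal sketch under stated assumptions (the HACKER_NEWS table must already be registered with the BigQuery table provider, as the IT does from its test table definition; the wiring here is illustrative, not the IT's own code):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTableProvider;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushDownQuerySketch {
      public static void main(String[] args) {
        Pipeline pipeline =
            Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        // Assumes HACKER_NEWS was registered with this provider beforehand
        // (e.g. via CREATE EXTERNAL TABLE); shown here only for shape.
        PCollection<Row> filtered =
            pipeline.apply(
                SqlTransform.query(
                        "SELECT `by` AS author, type, title, score "
                            + "FROM beam.HACKER_NEWS "
                            + "WHERE (type = 'story' OR type = 'job') AND score > 2")
                    .withTableProvider("beam", new BigQueryTableProvider()));
        pipeline.run().waitUntilFinish();
      }
    }
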
    Jan 27, 2022 1:21:49 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 27, 2022 1:21:49 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 27, 2022 1:21:49 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 27, 2022 1:21:49 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1534365740]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)
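
The exception text above already names the remedy: a PCollection of Beam Rows cannot get a coder by inference and needs an explicit schema, after which a RowCoder is derived. A minimal sketch; the field names mirror the SELECT list logged above and are illustrative only (the actual HACKER_NEWS column types may differ), and `rows` stands in for the PCollection<Row> that failed:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFix {
      // Attaches an explicit schema so the Row PCollection gets a coder.
      // Field names/types mirror the projected columns above and are
      // assumptions, not taken from the IT's source.
      static PCollection<Row> withSchema(PCollection<Row> rows) {
        Schema schema =
            Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();
        return rows.setRowSchema(schema);
      }
    }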

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 27, 2022 1:21:49 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 27, 2022 1:21:49 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 27, 2022 1:21:49 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 27, 2022 1:21:49 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 27, 2022 1:21:49 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 27, 2022 1:21:49 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 27, 2022 1:21:50 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 27, 2022 1:21:50 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
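
The BeamPushDownIOSourceRel above maps its usedFields and supported filter onto the Storage Read API's projection and row-restriction knobs. Roughly what that becomes on the read session, as a sketch rather than the IT's internal wiring (session and table setup are elided; the strings are copied from the log above):

    import com.google.cloud.bigquery.storage.v1.ReadSession;

    public class PushDownOptionsSketch {
      public static void main(String[] args) {
        // Projection from usedFields and predicate from the supported
        // filter, expressed as Storage Read API table read options.
        ReadSession.TableReadOptions readOptions =
            ReadSession.TableReadOptions.newBuilder()
                .addSelectedFields("by")
                .addSelectedFields("type")
                .addSelectedFields("title")
                .addSelectedFields("score")
                .setRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2")
                .build();
        System.out.println(readOptions);
      }
    }
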
    Jan 27, 2022 1:21:50 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 27, 2022 1:21:56 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 27, 2022 1:21:56 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-YCffysRTmlcOjanlGwq5EAMniHJUzNYmrIOSC1MAHpY.jar
    Jan 27, 2022 1:21:56 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.37.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.37.0-SNAPSHOT-tests-i_OZl49ESmTJapv-sP-C4ncpbNStBrioKNjMgWe4tB4.jar
    Jan 27, 2022 1:21:56 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gradle-worker-PoUv-KFKMEdJk8MJ5PNjzCnYxTB5BnBCs_7cbLXF5ho.jar
    Jan 27, 2022 1:21:56 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.37.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.37.0-SNAPSHOT-o1aADMlvGSxjVur0clOXFqwJnMF45bS3R6zGNis09pU.jar
    Jan 27, 2022 1:21:56 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4882150296083348894.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-pcT82lIJilJYonJ1XalnUEGS0o5MUmf46Bz-CeLCLh0.jar
    Jan 27, 2022 1:21:56 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.37.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.37.0-SNAPSHOT-Tl-rqROyfFjUbm1pi2LQ2tdsZFwawXrRlgBoBaQ0-f4.jar
    Jan 27, 2022 1:21:56 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/hadoop-common/build/libs/beam-sdks-java-io-hadoop-common-2.37.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-hadoop-common-2.37.0-SNAPSHOT--Z7FaTJuLSyiPWeUL9VxAgqy53Gn2R4elBkiZEpHHyU.jar
    Jan 27, 2022 1:21:56 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/parquet/build/libs/beam-sdks-java-io-parquet-2.37.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-parquet-2.37.0-SNAPSHOT--Y3Ki6BsSIhDqZLGsTPV4L2LNYlAKfQgFkOYYPov-fs.jar
    Jan 27, 2022 1:21:57 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 355 files cached, 7 files newly uploaded in 0 seconds
    Jan 27, 2022 1:21:57 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 27, 2022 1:21:57 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142351 bytes, hash 7a08db38a142ed269db672dcdb9322e5981ff8092c685fa84a6ccf9bc9b82ff1> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-egjbOKFC7SadtnLc25Mi5Zgf-AksaF-oSmzPm8m4L_E.pb
    Jan 27, 2022 1:22:00 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 27, 2022 1:22:00 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 27, 2022 1:22:00 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 27, 2022 1:22:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 27, 2022 1:22:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-26_17_22_01-1413210348920769804?project=apache-beam-testing
    Jan 27, 2022 1:22:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-26_17_22_01-1413210348920769804
    Jan 27, 2022 1:22:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-26_17_22_01-1413210348920769804
    Jan 27, 2022 1:22:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-27T01:22:04.380Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 27, 2022 1:22:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T01:22:12.319Z: Worker configuration: e2-standard-2 in us-central1-f.
    Jan 27, 2022 1:22:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T01:22:13.121Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 27, 2022 1:22:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T01:22:13.151Z: Expanding GroupByKey operations into optimizable parts.
    Jan 27, 2022 1:22:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T01:22:13.200Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 27, 2022 1:22:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T01:22:13.262Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 27, 2022 1:22:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T01:22:13.291Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 27, 2022 1:22:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T01:22:13.313Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 27, 2022 1:22:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T01:22:13.646Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 27, 2022 1:22:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T01:22:13.709Z: Starting 5 workers in us-central1-f...
    Jan 27, 2022 1:22:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T01:22:27.081Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 27, 2022 1:23:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T01:23:06.218Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 27, 2022 1:23:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T01:23:30.271Z: Workers have started successfully.
    Jan 27, 2022 1:23:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T01:23:30.304Z: Workers have started successfully.
    Jan 27, 2022 1:24:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2022-01-27T01:24:02.344Z: java.io.IOException: Failed to advance reader of source: name: "projects/apache-beam-testing/locations/us/sessions/CAISDDM2OTVueFEtZXpkVRoCamQaAmly/streams/CAQaAmpkGgJpciDkxdPKBSgC"

    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:625)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    	at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDM2OTVueFEtZXpkVRoCamQaAmly/streams/CAQaAmpkGgJpciDkxdPKBSgC': offset 123712 has not been allocated yet
    	at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:57)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
    	at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
    	at com.google.api.gax.grpc.ExceptionResponseObserver.onErrorImpl(ExceptionResponseObserver.java:82)
    	at com.google.api.gax.rpc.StateCheckingResponseObserver.onError(StateCheckingResponseObserver.java:86)
    	at com.google.api.gax.grpc.GrpcDirectStreamController$ResponseObserverAdapter.onClose(GrpcDirectStreamController.java:149)
    	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
    	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
    	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
    	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
    	... 3 more
    	Suppressed: java.lang.RuntimeException: Asynchronous task failed
    		at com.google.api.gax.rpc.ServerStreamIterator.hasNext(ServerStreamIterator.java:105)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.readNextRecord(BigQueryStorageStreamSource.java:211)
    		at org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageStreamSource$BigQueryStorageStreamReader.advance(BigQueryStorageStreamSource.java:206)
    		at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:622)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:419)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:205)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:163)
    		at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
    		at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    		at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    		... 3 more
    Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: there was an error operating on 'projects/apache-beam-testing/locations/us/sessions/CAISDDM2OTVueFEtZXpkVRoCamQaAmly/streams/CAQaAmpkGgJpciDkxdPKBSgC': offset 123712 has not been allocated yet
    	at io.grpc.Status.asRuntimeException(Status.java:535)
    	... 10 more

    Jan 27, 2022 1:24:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T01:24:03.197Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 27, 2022 1:24:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T01:24:03.333Z: Cleaning up.
    Jan 27, 2022 1:24:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T01:24:03.412Z: Stopping worker pool...
    Jan 27, 2022 1:26:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T01:26:32.803Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 27, 2022 1:26:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-27T01:26:32.873Z: Worker pool stopped.
    Jan 27, 2022 1:26:39 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-26_17_22_01-1413210348920769804 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f02629a9-727d-4fbb-8b01-3441d16b9a32 and timestamp: 2022-01-27T01:26:39.259000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.491

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 27, 2022 1:26:39 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 5 mins 2.66 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 22s
165 actionable tasks: 107 executed, 56 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/ek2lkwg2v7lty

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2961

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2961/display/redirect?page=changes>

Changes:

[mmack] [BEAM-13746] Fix deserialization of SSECustomerKey for AWS Sdk v2

[noreply] [BEAM-7928] Allow users to specify worker disk type for Dataflow runner


------------------------------------------
[...truncated 382.55 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 6b0839bef166a31946c62f8df471341f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 6'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 6'
Successfully started process 'Gradle Test Executor 6'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 26, 2022 7:37:11 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 26, 2022 7:37:12 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 26, 2022 7:37:13 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 26, 2022 7:37:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 26, 2022 7:37:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 26, 2022 7:37:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 26, 2022 7:37:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 26, 2022 7:37:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 26, 2022 7:37:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 26, 2022 7:37:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@741767653]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 26, 2022 7:37:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 26, 2022 7:37:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 26, 2022 7:37:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 26, 2022 7:37:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 26, 2022 7:37:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 26, 2022 7:37:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 26, 2022 7:37:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@190838539]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 26, 2022 7:37:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 26, 2022 7:37:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 26, 2022 7:37:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 26, 2022 7:37:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 26, 2022 7:37:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 26, 2022 7:37:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 26, 2022 7:37:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 26, 2022 7:37:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 26, 2022 7:37:19 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 26, 2022 7:37:23 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 26, 2022 7:37:23 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 26, 2022 7:37:23 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5571828868433628792.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-t9nzQMT2okPvdRAJ4u7z3JhaN8eKEdHAtATC8bkMkW4.jar
    Jan 26, 2022 7:37:23 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Jan 26, 2022 7:37:24 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 26, 2022 7:37:24 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142349 bytes, hash bdd620d1a69ce73032bc966d831a9b8cedfef7f9a36aba6b3f840186d3c24517> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-vdYg0aac5zAyvJZtgxqbjO3-9_mjarprP4QBhtPCRRc.pb
    Jan 26, 2022 7:37:27 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 26, 2022 7:37:27 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 26, 2022 7:37:27 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 26, 2022 7:37:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 26, 2022 7:37:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-26_11_37_27-17141570671672586915?project=apache-beam-testing
    Jan 26, 2022 7:37:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-26_11_37_27-17141570671672586915
    Jan 26, 2022 7:37:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-26_11_37_27-17141570671672586915
    Jan 26, 2022 7:37:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-26T19:37:31.340Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 26, 2022 7:37:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T19:37:42.418Z: Worker configuration: e2-standard-2 in us-central1-a.
    Jan 26, 2022 7:37:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T19:37:43.199Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 26, 2022 7:37:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T19:37:43.242Z: Expanding GroupByKey operations into optimizable parts.
    Jan 26, 2022 7:37:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T19:37:43.286Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 26, 2022 7:37:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T19:37:43.360Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 26, 2022 7:37:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T19:37:43.394Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 26, 2022 7:37:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T19:37:43.428Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 26, 2022 7:37:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T19:37:43.817Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 26, 2022 7:37:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T19:37:43.908Z: Starting 5 workers in us-central1-a...
    Jan 26, 2022 7:38:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T19:38:03.327Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
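
    For reference, the two Monitoring v3 API methods that warning links to
    (metricDescriptors.list and metricDescriptors.delete) can be called directly to
    clean out old descriptors. A hedged sketch -- PROJECT_ID and the descriptor name
    are placeholders, not values from this job:

        # List custom metric descriptors via the Monitoring v3 API:
        curl -s -G -H "Authorization: Bearer $(gcloud auth print-access-token)" \
          "https://monitoring.googleapis.com/v3/projects/PROJECT_ID/metricDescriptors" \
          --data-urlencode 'filter=metric.type = starts_with("custom.googleapis.com/")'

        # Delete one unused descriptor so new custom metrics can be created:
        curl -s -X DELETE -H "Authorization: Bearer $(gcloud auth print-access-token)" \
          "https://monitoring.googleapis.com/v3/projects/PROJECT_ID/metricDescriptors/custom.googleapis.com/some_old_counter"
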
    Jan 26, 2022 7:38:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T19:38:29.940Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 26, 2022 7:38:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T19:38:56.373Z: Workers have started successfully.
    Jan 26, 2022 7:38:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T19:38:56.408Z: Workers have started successfully.
    Jan 26, 2022 7:39:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T19:39:27.853Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 26, 2022 7:39:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T19:39:28.030Z: Cleaning up.
    Jan 26, 2022 7:39:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T19:39:28.113Z: Stopping worker pool...
    Jan 26, 2022 7:41:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T19:41:56.398Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 26, 2022 7:41:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T19:41:56.438Z: Worker pool stopped.
    Jan 26, 2022 7:42:04 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-26_11_37_27-17141570671672586915 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 97dd67af-a727-4b3e-b7a3-e716f763fbf8 and timestamp: 2022-01-26T19:42:04.275000000Z:
                     Metric:                    Value:
                   read_time                    10.149
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 26, 2022 7:42:04 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
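
    For reference, that warning means the run's metrics were collected but dropped
    because the InfluxDB publisher had no target configured. A hypothetical sketch of
    supplying one via the test pipeline options -- the option names below are
    assumptions based on Beam's test utilities, not values read from this job's
    configuration:

        -DbeamTestPipelineOptions=["...","--influxHost=http://localhost:8086",
          "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch"]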

Gradle Test Executor 6 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 56.8 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 20m 50s
165 actionable tasks: 125 executed, 38 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/vf5holcbkjspi

Stopped 5 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2960

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2960/display/redirect?page=changes>

Changes:

[mmack] [BEAM-8807] Add integration test for SnsIO.write (Sdk v1 & v2)


------------------------------------------
[...truncated 346.43 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 972'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 972'
Successfully started process 'Gradle Test Executor 972'

Gradle Test Executor 972 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 26, 2022 12:44:44 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 26, 2022 12:44:45 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 26, 2022 12:44:46 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 26, 2022 12:44:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 26, 2022 12:44:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 26, 2022 12:44:50 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 26, 2022 12:44:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 26, 2022 12:44:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 26, 2022 12:44:51 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 26, 2022 12:44:51 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@535517293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
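
    For reference, the exception above lists the usual fix: a PCollection of Beam Rows
    needs a schema attached before coder inference can succeed. A minimal,
    self-contained sketch -- class name, schema, and values are illustrative, not the
    test's actual code:

        import org.apache.beam.sdk.Pipeline;
        import org.apache.beam.sdk.schemas.Schema;
        import org.apache.beam.sdk.transforms.Create;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        public class RowSchemaSketch {
          public static void main(String[] args) {
            Pipeline pipeline = Pipeline.create();

            // Illustrative schema covering only the four projected columns from the
            // query above; the real HACKER_NEWS table has more fields.
            Schema schema =
                Schema.builder()
                    .addNullableField("author", Schema.FieldType.STRING)
                    .addNullableField("type", Schema.FieldType.STRING)
                    .addNullableField("title", Schema.FieldType.STRING)
                    .addNullableField("score", Schema.FieldType.INT64)
                    .build();

            Row row =
                Row.withSchema(schema).addValues("someone", "story", "a title", 3L).build();

            // Attaching the schema is what lets coder inference succeed; without it,
            // PCollection.getCoder() fails with the IllegalStateException shown above.
            PCollection<Row> rows = pipeline.apply(Create.of(row).withRowSchema(schema));

            // After any downstream ParDo that emits Rows, re-attach it the same way:
            //   output.setRowSchema(schema);

            pipeline.run().waitUntilFinish();
          }
        }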

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 26, 2022 12:44:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 26, 2022 12:44:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 26, 2022 12:44:52 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 26, 2022 12:44:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 26, 2022 12:44:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 26, 2022 12:44:52 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 26, 2022 12:44:52 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1899907523]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 26, 2022 12:44:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 26, 2022 12:44:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 26, 2022 12:44:52 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 26, 2022 12:44:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 26, 2022 12:44:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 26, 2022 12:44:52 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 26, 2022 12:44:53 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 26, 2022 12:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
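
    For context, the BEAMPlan above shows both the projection (usedFields) and the
    filter being pushed into the BigQuery source. The same predicate can be exercised
    locally with Beam SQL over an in-memory table; a self-contained sketch, where the
    schema and rows are illustrative and no push-down occurs since there is no
    external source (PCOLLECTION is the implicit table name for a single-input
    SqlTransform):

        import org.apache.beam.sdk.Pipeline;
        import org.apache.beam.sdk.extensions.sql.SqlTransform;
        import org.apache.beam.sdk.schemas.Schema;
        import org.apache.beam.sdk.transforms.Create;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        public class PushDownPredicateSketch {
          public static void main(String[] args) {
            Pipeline pipeline = Pipeline.create();

            Schema schema =
                Schema.builder()
                    .addNullableField("by", Schema.FieldType.STRING)
                    .addNullableField("type", Schema.FieldType.STRING)
                    .addNullableField("title", Schema.FieldType.STRING)
                    .addNullableField("score", Schema.FieldType.INT64)
                    .build();

            PCollection<Row> input =
                pipeline.apply(
                    Create.of(
                            Row.withSchema(schema).addValues("a", "story", "t1", 3L).build(),
                            Row.withSchema(schema).addValues("b", "comment", "t2", 9L).build())
                        .withRowSchema(schema));

            // Same WHERE clause the test pushes down to BigQuery.
            PCollection<Row> filtered =
                input.apply(
                    SqlTransform.query(
                        "SELECT `by` AS author, type, title, score FROM PCOLLECTION "
                            + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

            pipeline.run().waitUntilFinish();
          }
        }
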
    Jan 26, 2022 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 26, 2022 12:44:58 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 26, 2022 12:44:58 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 26, 2022 12:44:58 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3477164420160401801.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-SknkTlVc_O73zJUt8JuzOkCSslVZbn1PRpMKVv56oV4.jar
    Jan 26, 2022 12:44:59 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Jan 26, 2022 12:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 26, 2022 12:44:59 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142349 bytes, hash 634e4630ad61e70ca988c89e8a1370aee2f375ef922b24c1faff3fa3b7c1ffd8> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Y05GMK1h5wypiMieihNwruLzde-SKyTB-v8_o7fB_9g.pb
    Jan 26, 2022 12:45:02 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 26, 2022 12:45:02 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 26, 2022 12:45:02 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 26, 2022 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 26, 2022 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-26_04_45_03-17415239798977679377?project=apache-beam-testing
    Jan 26, 2022 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-26_04_45_03-17415239798977679377
    Jan 26, 2022 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-26_04_45_03-17415239798977679377
    Jan 26, 2022 12:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-26T12:45:06.524Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 26, 2022 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T12:45:17.990Z: Worker configuration: e2-standard-2 in us-central1-c.
    Jan 26, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T12:45:19.025Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 26, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T12:45:19.073Z: Expanding GroupByKey operations into optimizable parts.
    Jan 26, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T12:45:19.103Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 26, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T12:45:19.160Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 26, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T12:45:19.220Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 26, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T12:45:19.253Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 26, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T12:45:19.564Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 26, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T12:45:19.657Z: Starting 5 workers in us-central1-c...
    Jan 26, 2022 12:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T12:45:41.660Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 26, 2022 12:46:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T12:46:08.791Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 26, 2022 12:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T12:46:33.673Z: Workers have started successfully.
    Jan 26, 2022 12:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T12:46:33.697Z: Workers have started successfully.
    Jan 26, 2022 12:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T12:47:03.439Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 26, 2022 12:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T12:47:03.608Z: Cleaning up.
    Jan 26, 2022 12:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T12:47:03.688Z: Stopping worker pool...
    Jan 26, 2022 12:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T12:49:31.239Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 26, 2022 12:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T12:49:31.276Z: Worker pool stopped.
    Jan 26, 2022 12:49:38 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-26_04_45_03-17415239798977679377 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 355178c7-e602-4c0a-97e8-f54c6361d9e9 and timestamp: 2022-01-26T12:49:38.177000000Z:
                     Metric:                    Value:
                   read_time                      6.63
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 26, 2022 12:49:38 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 972 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.001 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.002 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 59.087 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 15s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/g5hbts7qgvtiu

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2959

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2959/display/redirect?page=changes>

Changes:

[artur.khanin] Privacy policy update regarding Apache Beam Playground

[Daniel Oliveira] [BEAM-13321] Fix exception with BigQuery StreamWriter TraceID.


------------------------------------------
[...truncated 347.19 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 6b0839bef166a31946c62f8df471341f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 26, 2022 6:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 26, 2022 6:45:01 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 26, 2022 6:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 26, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 26, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 26, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 26, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 26, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 26, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 26, 2022 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@457596841]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 26, 2022 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 26, 2022 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 26, 2022 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 26, 2022 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 26, 2022 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 26, 2022 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 26, 2022 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@715099463]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 26, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 26, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 26, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 26, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 26, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 26, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 26, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 26, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 26, 2022 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 26, 2022 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 26, 2022 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 26, 2022 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3393099656890875474.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-sybRYaWs3N55V0QtnSqGaawMF3iyWC2uohO813NJGrA.jar
    Jan 26, 2022 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Jan 26, 2022 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 26, 2022 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142349 bytes, hash 51510cbabd8a3dbe25dfdf695dbf48e36550cad10b4e2adb4b6966dc456e55a3> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-UVEMur2KPb4l399pXb9I42VQytELTirbS2lm3EVuVaM.pb
    Jan 26, 2022 6:45:17 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 26, 2022 6:45:18 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 26, 2022 6:45:18 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 26, 2022 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 26, 2022 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-25_22_45_18-9613074089165877572?project=apache-beam-testing
    Jan 26, 2022 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-25_22_45_18-9613074089165877572
    Jan 26, 2022 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-25_22_45_18-9613074089165877572
    Jan 26, 2022 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-26T06:45:21.530Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 26, 2022 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T06:45:30.173Z: Worker configuration: e2-standard-2 in us-central1-a.
    Jan 26, 2022 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T06:45:30.964Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 26, 2022 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T06:45:31.033Z: Expanding GroupByKey operations into optimizable parts.
    Jan 26, 2022 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T06:45:31.064Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 26, 2022 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T06:45:31.125Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 26, 2022 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T06:45:31.153Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 26, 2022 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T06:45:31.185Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 26, 2022 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T06:45:31.550Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 26, 2022 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T06:45:31.629Z: Starting 5 workers in us-central1-a...
    Jan 26, 2022 6:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T06:45:50.375Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 26, 2022 6:46:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T06:46:13.228Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 26, 2022 6:46:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T06:46:39.693Z: Workers have started successfully.
    Jan 26, 2022 6:46:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T06:46:39.720Z: Workers have started successfully.
    Jan 26, 2022 6:47:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T06:47:10.504Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 26, 2022 6:47:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T06:47:10.649Z: Cleaning up.
    Jan 26, 2022 6:47:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T06:47:10.718Z: Stopping worker pool...
    Jan 26, 2022 6:49:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T06:49:42.015Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 26, 2022 6:49:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T06:49:42.043Z: Worker pool stopped.
    Jan 26, 2022 6:49:48 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-25_22_45_18-9613074089165877572 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 99c3d4a3-ff3f-4e67-aa34-3d53cd98cfb2 and timestamp: 2022-01-26T06:49:48.828000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.217

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 26, 2022 6:49:49 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 53.334 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 29s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/g6qgm4uvgmy32

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2958

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2958/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13310] remove call to get offset consumer config, which was rep…


------------------------------------------
[...truncated 350.44 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is f8966d54091b6d6ef05ab5305475c155
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 26, 2022 12:54:10 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 26, 2022 12:54:11 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 26, 2022 12:54:12 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 26, 2022 12:54:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 26, 2022 12:54:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 26, 2022 12:54:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 26, 2022 12:54:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 26, 2022 12:54:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 26, 2022 12:54:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 26, 2022 12:54:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
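
The SQL text, the Calcite SQLPlan, and the Beam physical plan above are stages of one compilation. For reference, a minimal, self-contained sketch of pushing a comparable query through Beam SQL's public SqlTransform API (the IT itself resolves `beam`.`HACKER_NEWS` through a BigQuery table provider; the four-column schema and the single in-memory row below are stand-ins for illustration):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class HackerNewsQuerySketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Stand-in schema; the real HACKER_NEWS table has 14 columns ($0..$13 in the plan).
        Schema schema = Schema.builder()
            .addNullableField("by", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();

        PCollection<Row> hackerNews = p.apply(
            Create.of(Row.withSchema(schema).addValues("alice", "story", "Hello", 5L).build())
                .withRowSchema(schema));

        // A single unnamed input is visible to the query under the name PCOLLECTION.
        PCollection<Row> filtered = hackerNews.apply(SqlTransform.query(
            "SELECT `by` AS author, type, title, score "
                + "FROM PCOLLECTION "
                + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }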


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@457596841]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
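
All three root causes reduce to the same thing: the planner produced a PCollection<Row> and nothing attached a schema before coder resolution. Where one controls the pipeline code, the remedy the message itself names is to attach the schema (and with it a row coder) up front; a minimal sketch, with the class and method names invented for illustration:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    final class RowSchemaFix {
      /** Attach the schema so downstream coder inference is never needed. */
      static PCollection<Row> withKnownSchema(PCollection<Row> rows, Schema schema) {
        // setRowSchema installs a schema-aware coder on the collection, which is
        // exactly what the IllegalStateException above asks for; an explicit
        // rows.setCoder(RowCoder.of(schema)) would be the equivalent spelling.
        return rows.setRowSchema(schema);
      }
    }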

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 26, 2022 12:54:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 26, 2022 12:54:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 26, 2022 12:54:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 26, 2022 12:54:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 26, 2022 12:54:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 26, 2022 12:54:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 26, 2022 12:54:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@715099463]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 26, 2022 12:54:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 26, 2022 12:54:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 26, 2022 12:54:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 26, 2022 12:54:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 26, 2022 12:54:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 26, 2022 12:54:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 26, 2022 12:54:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 26, 2022 12:54:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
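
    This run is the interesting case: the planner folded the projection and the filter into the source itself (BeamPushDownIOSourceRel), so only four columns of pre-filtered rows leave BigQuery. Hand-written against BigQueryIO's Storage Read API, the equivalent read would look roughly like the sketch below (the public Hacker News table reference is an assumption):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply("Read HACKER_NEWS with push-down",
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // assumed table reference
                .withMethod(Method.DIRECT_READ)
                // Projection push-down: only these columns cross the wire.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Predicate push-down: evaluated server-side by the Storage Read API.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }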
    Jan 26, 2022 12:54:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 26, 2022 12:54:23 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 26, 2022 12:54:23 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 26, 2022 12:54:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5728573169280954217.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-H6BlvlvnLSBFV9_dZ2tyeUTObbLBwhMtFQmnuLKa5nQ.jar
    Jan 26, 2022 12:54:24 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Jan 26, 2022 12:54:24 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 26, 2022 12:54:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142348 bytes, hash 7a4d6a31c123279c20c65fd37cb33f9a0e4cd6b38fdd0238b29f77f1e4e28de2> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-ek1qMcEjJ5wgxl_TfLM_mg5M1rOP3QI4sp938eTijeI.pb
    Jan 26, 2022 12:54:27 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 26, 2022 12:54:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 26, 2022 12:54:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 26, 2022 12:54:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 26, 2022 12:54:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-25_16_54_28-13696607032408835920?project=apache-beam-testing
    Jan 26, 2022 12:54:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-25_16_54_28-13696607032408835920
    Jan 26, 2022 12:54:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-25_16_54_28-13696607032408835920
    Jan 26, 2022 12:54:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-26T00:54:31.725Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 26, 2022 12:54:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T00:54:40.940Z: Worker configuration: e2-standard-2 in us-central1-f.
    Jan 26, 2022 12:54:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T00:54:41.800Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 26, 2022 12:54:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T00:54:41.831Z: Expanding GroupByKey operations into optimizable parts.
    Jan 26, 2022 12:54:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T00:54:41.859Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 26, 2022 12:54:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T00:54:41.940Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 26, 2022 12:54:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T00:54:41.980Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 26, 2022 12:54:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T00:54:42.011Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 26, 2022 12:54:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T00:54:42.393Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 26, 2022 12:54:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T00:54:42.464Z: Starting 5 workers in us-central1-f...
    Jan 26, 2022 12:54:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T00:54:52.798Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 26, 2022 12:55:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T00:55:33.724Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 26, 2022 12:55:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T00:55:57.597Z: Workers have started successfully.
    Jan 26, 2022 12:55:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T00:55:57.645Z: Workers have started successfully.
    Jan 26, 2022 12:56:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T00:56:31.565Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 26, 2022 12:56:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T00:56:32.130Z: Cleaning up.
    Jan 26, 2022 12:56:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T00:56:32.209Z: Stopping worker pool...
    Jan 26, 2022 12:58:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T00:58:58.349Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 26, 2022 12:59:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-26T00:58:58.412Z: Worker pool stopped.
    Jan 26, 2022 12:59:05 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-25_16_54_28-13696607032408835920 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 9ff16b66-8a68-4937-ab01-ec48ce63a696 and timestamp: 2022-01-26T00:59:05.761000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                       9.3
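
    fields_read counts the fields delivered through the monitoring ParDo, and read_time is, presumably, the wall-clock read duration in seconds. A hypothetical monitor in this style, using the Beam Metrics API (the class name, metric namespace, and bookkeeping here are invented; the real RowMonitor/TimeMonitor live in Beam's test utilities):

    import org.apache.beam.sdk.metrics.Counter;
    import org.apache.beam.sdk.metrics.Distribution;
    import org.apache.beam.sdk.metrics.Metrics;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.values.Row;

    /** Hypothetical pass-through monitor in the style of RowMonitor/TimeMonitor. */
    class RowMonitorSketch extends DoFn<Row, Row> {
      private final Counter fieldsRead = Metrics.counter("perf", "fields_read");
      private final Distribution readTime = Metrics.distribution("perf", "read_time_millis");

      @ProcessElement
      public void processElement(ProcessContext c) {
        fieldsRead.inc(c.element().getFieldCount()); // one increment per field delivered
        readTime.update(System.currentTimeMillis()); // timestamps bracket the read
        c.output(c.element());
      }
    }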

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 26, 2022 12:59:05 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
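
    The warning means the numbers above were computed but never exported. Runs that do publish pass the InfluxDB coordinates alongside the other pipeline options, along these lines (option names follow Beam's load-test conventions and are an assumption here, as are the host and measurement values):

    --influxDatabase=beam_test_metrics --influxMeasurement=sql_bqio_read_java_batch --influxHost=http://localhost:8086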

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 58.871 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 45s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/pa5zlwwbjz45a

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2957

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2957/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13736] Make lifting cache exact. (#16603)

[noreply] Merge pull request #16565 from [BEAM-13692][Playground]  Implement

[noreply] Merge pull request #16502 from [BEAM-13650][Playground] Add link for


------------------------------------------
[...truncated 345.99 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 17'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 17'
Successfully started process 'Gradle Test Executor 17'

Gradle Test Executor 17 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 25, 2022 6:44:46 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 25, 2022 6:44:47 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 25, 2022 6:44:48 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 25, 2022 6:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 25, 2022 6:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 25, 2022 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 25, 2022 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 25, 2022 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 25, 2022 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 25, 2022 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@457596841]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 25, 2022 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 25, 2022 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 25, 2022 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 25, 2022 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 25, 2022 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 25, 2022 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 25, 2022 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1460265227]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 25, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 25, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 25, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 25, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 25, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 25, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 25, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 25, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 25, 2022 6:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 25, 2022 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 25, 2022 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 25, 2022 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8563950014313017811.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-YUKo6oodEagPE--HLwvH9LTBPLIa6oeDaQeMq_79Xcw.jar
    Jan 25, 2022 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 1 seconds
    Jan 25, 2022 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 25, 2022 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142349 bytes, hash 44e19bab0d163d98efafd9f3a7f69ea9f23ada7595fade38fb14bcdd802c33b5> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-ROGbqw0WPZjvr9nzp_aeqfI62nWV-t44-xS83YAsM7U.pb
    Jan 25, 2022 6:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 25, 2022 6:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 25, 2022 6:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 25, 2022 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 25, 2022 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-25_10_45_10-13726484910815960642?project=apache-beam-testing
    Jan 25, 2022 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-25_10_45_10-13726484910815960642
    Jan 25, 2022 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-25_10_45_10-13726484910815960642
    Jan 25, 2022 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-25T18:45:13.454Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 25, 2022 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T18:45:24.352Z: Worker configuration: e2-standard-2 in us-central1-a.
    Jan 25, 2022 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T18:45:25.049Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 25, 2022 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T18:45:25.079Z: Expanding GroupByKey operations into optimizable parts.
    Jan 25, 2022 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T18:45:25.108Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 25, 2022 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T18:45:25.173Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 25, 2022 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T18:45:25.206Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 25, 2022 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T18:45:25.241Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 25, 2022 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T18:45:25.614Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 25, 2022 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T18:45:25.682Z: Starting 5 workers in us-central1-a...
    Jan 25, 2022 6:45:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T18:45:32.068Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 25, 2022 6:46:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T18:46:14.283Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 25, 2022 6:46:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T18:46:39.830Z: Workers have started successfully.
    Jan 25, 2022 6:46:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T18:46:39.918Z: Workers have started successfully.
    Jan 25, 2022 6:47:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T18:47:11.007Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 25, 2022 6:47:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T18:47:11.145Z: Cleaning up.
    Jan 25, 2022 6:47:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T18:47:11.220Z: Stopping worker pool...
    Jan 25, 2022 6:49:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T18:49:43.359Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 25, 2022 6:49:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T18:49:43.409Z: Worker pool stopped.
    Jan 25, 2022 6:49:52 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-25_10_45_10-13726484910815960642 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): e9c99d40-eb0a-4232-b09a-6724d1269235 and timestamp: 2022-01-25T18:49:52.841000000Z:
                     Metric:                    Value:
                   read_time                     7.011
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 25, 2022 6:49:52 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 17 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.012 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.008 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 5 mins 13.068 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 31s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/ed7e5ksa5ghzu

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2956

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2956/display/redirect?page=changes>

Changes:

[mmack] [BEAM-13510] Don't retry on invalid SQS receipt handles.


------------------------------------------
[...truncated 351.71 KB...]
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 7f3d89ae570de56a95c9e92bf7b1741b
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 25, 2022 12:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 25, 2022 12:45:17 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 25, 2022 12:45:18 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 25, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 25, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 25, 2022 12:45:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 25, 2022 12:45:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 25, 2022 12:45:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 25, 2022 12:45:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 25, 2022 12:45:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@457596841]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 25, 2022 12:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 25, 2022 12:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 25, 2022 12:45:23 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 25, 2022 12:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 25, 2022 12:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 25, 2022 12:45:23 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 25, 2022 12:45:23 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@715099463]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 25, 2022 12:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 25, 2022 12:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 25, 2022 12:45:23 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 25, 2022 12:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 25, 2022 12:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 25, 2022 12:45:23 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 25, 2022 12:45:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 25, 2022 12:45:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
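    For comparison outside Beam SQL, the same projection and filter push-down can be
    requested directly on BigQueryIO's Storage Read API: withSelectedFields mirrors
    usedFields=[by, type, title, score] from the plan, and withRowRestriction mirrors
    the pushed-down filter logged above. A sketch assuming the public Hacker News
    table (the IT's actual table may differ):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full")  // assumed table
                .withMethod(Method.DIRECT_READ)
                // Projection push-down: only these fields leave BigQuery.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Filter push-down: evaluated server side by the Storage Read API.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }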
    Jan 25, 2022 12:45:24 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 25, 2022 12:45:29 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 25, 2022 12:45:29 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 25, 2022 12:45:29 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gradle-worker-XxYYG-3WFdIlVK18EOKpg4wgZChyWXD9fdXQtw08jV0.jar
    Jan 25, 2022 12:45:29 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5679733042359538992.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-gMy_kWAWwRFsdCjaoe0nLNZxvKIRKX_Wh9-lWF0RylM.jar
    Jan 25, 2022 12:45:29 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 360 files cached, 2 files newly uploaded in 0 seconds
    Jan 25, 2022 12:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 25, 2022 12:45:30 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142349 bytes, hash 91705a5759d2074be2c4dc0b874b3c2a74025c0b0b24a07b6d457a004f70681e> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-kXBaV1nSB0vixNwLh0s8KnQCXAsLJKB7bUV6AE9waB4.pb
    Jan 25, 2022 12:45:33 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 25, 2022 12:45:33 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 25, 2022 12:45:33 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 25, 2022 12:45:33 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 25, 2022 12:45:34 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-25_04_45_33-14725699014307031004?project=apache-beam-testing
    Jan 25, 2022 12:45:34 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-25_04_45_33-14725699014307031004
    Jan 25, 2022 12:45:34 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-25_04_45_33-14725699014307031004
    Jan 25, 2022 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-25T12:45:37.086Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 25, 2022 12:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T12:45:44.960Z: Worker configuration: e2-standard-2 in us-central1-a.
    Jan 25, 2022 12:45:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T12:45:45.756Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 25, 2022 12:45:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T12:45:45.799Z: Expanding GroupByKey operations into optimizable parts.
    Jan 25, 2022 12:45:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T12:45:45.830Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 25, 2022 12:45:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T12:45:45.908Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 25, 2022 12:45:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T12:45:45.937Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 25, 2022 12:45:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T12:45:45.973Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 25, 2022 12:45:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T12:45:46.358Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 25, 2022 12:45:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T12:45:46.417Z: Starting 5 workers in us-central1-a...
    Jan 25, 2022 12:46:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T12:46:16.284Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 25, 2022 12:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T12:46:31.968Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 25, 2022 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T12:46:56.329Z: Workers have started successfully.
    Jan 25, 2022 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T12:46:56.359Z: Workers have started successfully.
    Jan 25, 2022 12:47:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T12:47:25.442Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 25, 2022 12:47:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T12:47:25.610Z: Cleaning up.
    Jan 25, 2022 12:47:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T12:47:25.698Z: Stopping worker pool...
    Jan 25, 2022 12:49:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T12:49:52.572Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 25, 2022 12:49:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T12:49:52.617Z: Worker pool stopped.
    Jan 25, 2022 12:49:59 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-25_04_45_33-14725699014307031004 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): ffdd4484-f730-4307-8d38-9f6751131eca and timestamp: 2022-01-25T12:49:59.109000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.541

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 25, 2022 12:49:59 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
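
    This warning means the InfluxDB measurement and database settings were not supplied
    to the test run, so the metrics above were printed but not persisted. A hedged
    sketch of building the settings the publisher checks for; the host, database, and
    measurement values below are placeholders, and it is assumed the Jenkins jobs
    normally pass them via pipeline options such as --influxMeasurement and
    --influxDatabase.

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSettingsSketch {
      public static void main(String[] args) {
        // With measurement and database present, publishWithCheck no longer
        // skips publication. All values below are placeholders.
        InfluxDBSettings settings =
            InfluxDBSettings.builder()
                .withHost("http://localhost:8086")
                .withDatabase("beam_test_metrics")
                .withMeasurement("sql_bqio_read_java_batch")
                .get();
        System.out.println("InfluxDB settings built: " + settings);
      }
    }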

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 46.266 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 39s
165 actionable tasks: 104 executed, 59 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/wpj53qzzhgsbe

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2955

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2955/display/redirect?page=changes>

Changes:

[noreply] Improved multi-language pipelines section of the programming guide


------------------------------------------
[...truncated 345.93 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 7f3d89ae570de56a95c9e92bf7b1741b
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 19'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 19'
Successfully started process 'Gradle Test Executor 19'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 25, 2022 6:44:39 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 25, 2022 6:44:40 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 25, 2022 6:44:41 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 25, 2022 6:44:44 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 25, 2022 6:44:44 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 25, 2022 6:44:46 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 25, 2022 6:44:46 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 25, 2022 6:44:46 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 25, 2022 6:44:46 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 25, 2022 6:44:47 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@457596841]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 25, 2022 6:44:47 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 25, 2022 6:44:47 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 25, 2022 6:44:47 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 25, 2022 6:44:47 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 25, 2022 6:44:47 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 25, 2022 6:44:47 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 25, 2022 6:44:48 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@715099463]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 25, 2022 6:44:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 25, 2022 6:44:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 25, 2022 6:44:48 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 25, 2022 6:44:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 25, 2022 6:44:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 25, 2022 6:44:48 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 25, 2022 6:44:48 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 25, 2022 6:44:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 25, 2022 6:44:49 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 25, 2022 6:44:53 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 25, 2022 6:44:53 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 25, 2022 6:44:53 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2977608115037926797.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-wBiuGReYvtn1YuEf8ErsrE_VfcBPG1wyrfwza08h2LM.jar
    Jan 25, 2022 6:44:55 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 2 seconds
    Jan 25, 2022 6:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 25, 2022 6:44:56 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142349 bytes, hash 5646651e25ccab3cce02fde7f7dad6624a7fe6ff616a353f2ca2b79af4e06bf0> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-VkZlHiXMqzzOAv3n99rWYkp_5v9hajU_LKK3mvTga_A.pb
    Jan 25, 2022 6:44:59 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 25, 2022 6:44:59 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 25, 2022 6:44:59 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 25, 2022 6:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 25, 2022 6:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-24_22_44_59-1900459412905989233?project=apache-beam-testing
    Jan 25, 2022 6:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-24_22_44_59-1900459412905989233
    Jan 25, 2022 6:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-24_22_44_59-1900459412905989233
    Jan 25, 2022 6:45:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-25T06:45:03.246Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 25, 2022 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T06:45:13.420Z: Worker configuration: e2-standard-2 in us-central1-b.
    Jan 25, 2022 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T06:45:14.130Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 25, 2022 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T06:45:14.176Z: Expanding GroupByKey operations into optimizable parts.
    Jan 25, 2022 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T06:45:14.224Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 25, 2022 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T06:45:14.327Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 25, 2022 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T06:45:14.367Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 25, 2022 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T06:45:14.401Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 25, 2022 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T06:45:14.726Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 25, 2022 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T06:45:14.807Z: Starting 5 workers in us-central1-b...
    Jan 25, 2022 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T06:45:30.807Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 25, 2022 6:46:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T06:46:01.066Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 25, 2022 6:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T06:46:26.819Z: Workers have started successfully.
    Jan 25, 2022 6:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T06:46:26.850Z: Workers have started successfully.
    Jan 25, 2022 6:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T06:47:00.574Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 25, 2022 6:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T06:47:00.731Z: Cleaning up.
    Jan 25, 2022 6:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T06:47:00.810Z: Stopping worker pool...
    Jan 25, 2022 6:49:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T06:49:30.535Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 25, 2022 6:49:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T06:49:30.575Z: Worker pool stopped.
    Jan 25, 2022 6:49:39 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-24_22_44_59-1900459412905989233 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6bc8844a-e087-4c37-8d44-da4cfc123a35 and timestamp: 2022-01-25T06:49:39.572000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.543

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 25, 2022 6:49:39 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 19 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.008 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.008 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 5 mins 4.108 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 19s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/4wtigidpn5l46

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2954

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2954/display/redirect?page=changes>

Changes:

[Heejong Lee] [BEAM-13716] Clear before creating a new virtual environment in

[noreply] Merge pull request #16515 from [BEAM-13636] [Playground] Checking the

[ningkang0957] [BEAM-13275] Removed the explicit selenium dependency from setup

[noreply] [BEAM-10206] Deprecate unused shallow cloning functions (#16600)

[noreply] Bump Dataflow container versions (#16602)


------------------------------------------
[...truncated 359.16 KB...]
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 25, 2022 12:49:49 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 25, 2022 12:49:52 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 25, 2022 12:49:54 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 25, 2022 12:50:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 25, 2022 12:50:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 25, 2022 12:50:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 25, 2022 12:50:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 25, 2022 12:50:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 25, 2022 12:50:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 25, 2022 12:50:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1106513402]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 25, 2022 12:50:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 25, 2022 12:50:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 25, 2022 12:50:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 25, 2022 12:50:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 25, 2022 12:50:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 25, 2022 12:50:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 25, 2022 12:50:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1407280813]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 25, 2022 12:50:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 25, 2022 12:50:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 25, 2022 12:50:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 25, 2022 12:50:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 25, 2022 12:50:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 25, 2022 12:50:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 25, 2022 12:50:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 25, 2022 12:50:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 25, 2022 12:50:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 25, 2022 12:50:35 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 25, 2022 12:50:35 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 25, 2022 12:50:37 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6442635024460012465.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-MuCqTZG05IbPp5ehaVf5aIqV54IyxMwxTmZK_vVz-EY.jar
    Jan 25, 2022 12:50:43 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 7 seconds
    Jan 25, 2022 12:50:43 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 25, 2022 12:50:43 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142349 bytes, hash 4d9b3e2e4b3cf1c3b259799268e3bd985f4a0a94d7d8afb3df5732c093d527d5> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-TZs-Lks88cOyWXmSaOO9mF9KCpTX2K-z31cywJPVJ9U.pb
    Jan 25, 2022 12:50:52 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 25, 2022 12:50:53 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 25, 2022 12:50:53 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 25, 2022 12:50:53 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 25, 2022 12:50:54 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-24_16_50_53-16540613239875161240?project=apache-beam-testing
    Jan 25, 2022 12:50:54 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-24_16_50_53-16540613239875161240
    Jan 25, 2022 12:50:54 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-24_16_50_53-16540613239875161240
    Jan 25, 2022 12:51:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-25T00:50:57.203Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 25, 2022 12:51:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T00:51:06.027Z: Worker configuration: e2-standard-2 in us-central1-a.
    Jan 25, 2022 12:51:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T00:51:06.772Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 25, 2022 12:51:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T00:51:06.808Z: Expanding GroupByKey operations into optimizable parts.
    Jan 25, 2022 12:51:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T00:51:06.826Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 25, 2022 12:51:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T00:51:06.887Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 25, 2022 12:51:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T00:51:06.921Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 25, 2022 12:51:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T00:51:06.955Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 25, 2022 12:51:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T00:51:07.360Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 25, 2022 12:51:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T00:51:07.436Z: Starting 5 workers in us-central1-a...
    Jan 25, 2022 12:51:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T00:51:32.678Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 25, 2022 12:52:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T00:52:00.649Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 25, 2022 12:52:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T00:52:25.708Z: Workers have started successfully.
    Jan 25, 2022 12:52:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T00:52:25.759Z: Workers have started successfully.
    Jan 25, 2022 12:52:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T00:52:57.598Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 25, 2022 12:52:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T00:52:57.745Z: Cleaning up.
    Jan 25, 2022 12:52:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T00:52:57.830Z: Stopping worker pool...
    Jan 25, 2022 12:55:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T00:55:24.708Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 25, 2022 12:55:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-25T00:55:24.762Z: Worker pool stopped.
    Jan 25, 2022 12:55:29 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-24_16_50_53-16540613239875161240 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): e1b6d874-d260-4e6c-ac26-704931a3c669 and timestamp: 2022-01-25T00:55:30.622000000Z:
                     Metric:                    Value:
                   read_time                     7.158
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 25, 2022 12:55:31 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
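
The warning above means only that the InfluxDB leg of metrics publishing is skipped when no measurement/database is configured. A minimal sketch of configuring it, assuming Beam's testutils InfluxDBSettings builder; the host, database, and measurement values here are hypothetical, not taken from this job:

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    InfluxDBSettings settings =
        InfluxDBSettings.builder()
            .withHost("http://localhost:8086")           // hypothetical host
            .withDatabase("beam_test_metrics")           // hypothetical database
            .withMeasurement("sql_bqio_read_java_batch") // illustrative measurement name
            .get();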

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.209 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.244 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 5 mins 59.993 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings
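
For example, the failing task can be rerun with the individual deprecation warnings shown:

    > ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --warning-mode all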

BUILD FAILED in 10m 15s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/ufiehbez3xuhw

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2953

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2953/display/redirect?page=changes>

Changes:

[mmack] [BEAM-13653] Make SnsIO.write topicArn optional. If provided, validate

[noreply] [BEAM-10897] Update the fastavro lower bound due to an issue on Windows

[noreply] [BEAM-13605] Update pandas_doctests_test denylists in preparation for

[noreply] Merge pull request #16538 from [BEAM-13676][Playground][Bugfix]Build Of

[noreply] Merge pull request #16582 from [BEAM-13711] [Playground] [Bugfix] Add


------------------------------------------
[...truncated 347.11 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 5de997a2aa9b35540d6d2444e22e55d9
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 24, 2022 6:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
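
A minimal fix for this warning, assuming the empty value is intentional (i.e. the default worker container should be used), is to rename the flag inside -DbeamTestPipelineOptions:

    "--sdkContainerImage="   (in place of the deprecated "--workerHarnessContainerImage=")
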
    Jan 24, 2022 6:44:58 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 24, 2022 6:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 24, 2022 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 24, 2022 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 24, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 24, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 24, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 24, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 24, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
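
For orientation, the public-API equivalent of the query being planned here is a SqlTransform application. A minimal, self-contained sketch; the in-memory row is hypothetical, and the IT itself reads the HACKER_NEWS BigQuery table and goes through BeamSqlRelUtils, as the stack trace below shows:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class HackerNewsQuerySketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Hypothetical schema covering the four columns the query touches.
        Schema schema =
            Schema.builder()
                .addStringField("by")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();

        PCollection<Row> hackerNews =
            p.apply(
                Create.of(
                        Row.withSchema(schema)
                            .addValues("alice", "story", "Some title", 10L)
                            .build())
                    .withRowSchema(schema));

        // Same shape of query as in the log; applied to a single input,
        // Beam SQL exposes it under the table name PCOLLECTION.
        PCollection<Row> authors =
            hackerNews.apply(
                SqlTransform.query(
                    "SELECT `by` AS author, type, title, score "
                        + "FROM PCOLLECTION "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }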


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@535517293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
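
The two remedies named in this error message, sketched minimally; the schema here is hypothetical, matching the four selected columns (the real test derives field types from the BigQuery table):

    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    final class SchemaFix {
      static PCollection<Row> withKnownSchema(PCollection<Row> rows) {
        Schema schema =
            Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score") // field type is an assumption
                .build();
        // Option 1 (what the message recommends for Row): attach the schema,
        // which also gives the collection a RowCoder.
        return rows.setRowSchema(schema);
        // Option 2: set an explicit coder instead:
        //   rows.setCoder(RowCoder.of(schema));
      }
    }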

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 24, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 24, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 24, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 24, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 24, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 24, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 24, 2022 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1842276496]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 24, 2022 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 24, 2022 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 24, 2022 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 24, 2022 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 24, 2022 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 24, 2022 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 24, 2022 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 24, 2022 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
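
At the IO level, this push-down corresponds roughly to a direct BigQueryIO read in which the usedFields become the selected fields and the supported predicate becomes the row restriction. A minimal sketch; the table spec is hypothetical and `p` is assumed to be an existing Pipeline:

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.values.PCollection;

    PCollection<TableRow> rows =
        p.apply(
            BigQueryIO.readTableRows()
                .from("apache-beam-testing:beam.HACKER_NEWS") // hypothetical table spec
                .withMethod(TypedRead.Method.DIRECT_READ)
                // Projection push-down: only the used fields are read.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Predicate push-down: the filter logged above.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
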
    Jan 24, 2022 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 24, 2022 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 24, 2022 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 24, 2022 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1176544110000178089.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-p3G-fbc3CVy3Ci1o5eMtIRxwfWNYES7RIhzDdsTWjvQ.jar
    Jan 24, 2022 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 file newly uploaded in 1 second
    Jan 24, 2022 6:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 24, 2022 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142349 bytes, hash ff0a38f2ad9a5d498321192d2588eb4e1c022400e331ef8774fda7deb372c0a5> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-_wo48q2aXUmDIRktJYjrThwCJADjMe-HdP2n3rNywKU.pb
    Jan 24, 2022 6:45:14 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 24, 2022 6:45:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 24, 2022 6:45:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 24, 2022 6:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 24, 2022 6:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-24_10_45_15-9089627936229320318?project=apache-beam-testing
    Jan 24, 2022 6:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-24_10_45_15-9089627936229320318
    Jan 24, 2022 6:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-24_10_45_15-9089627936229320318
    Jan 24, 2022 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-24T18:45:18.891Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 24, 2022 6:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T18:45:28.189Z: Worker configuration: e2-standard-2 in us-central1-f.
    Jan 24, 2022 6:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T18:45:28.962Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 24, 2022 6:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T18:45:29.016Z: Expanding GroupByKey operations into optimizable parts.
    Jan 24, 2022 6:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T18:45:29.045Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 24, 2022 6:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T18:45:29.134Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 24, 2022 6:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T18:45:29.167Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 24, 2022 6:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T18:45:29.196Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 24, 2022 6:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T18:45:29.637Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 24, 2022 6:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T18:45:29.752Z: Starting 5 workers in us-central1-f...
    Jan 24, 2022 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T18:45:38.816Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 24, 2022 6:46:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T18:46:21.843Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 24, 2022 6:46:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T18:46:47.448Z: Workers have started successfully.
    Jan 24, 2022 6:46:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T18:46:47.571Z: Workers have started successfully.
    Jan 24, 2022 6:47:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T18:47:21.916Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 24, 2022 6:47:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T18:47:22.129Z: Cleaning up.
    Jan 24, 2022 6:47:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T18:47:22.248Z: Stopping worker pool...
    Jan 24, 2022 6:50:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T18:50:07.876Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 24, 2022 6:50:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T18:50:07.931Z: Worker pool stopped.
    Jan 24, 2022 6:50:18 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-24_10_45_15-9089627936229320318 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6e8ec26a-f8cf-4814-8bec-1d4d2ed50ee4 and timestamp: 2022-01-24T18:50:18.854000000Z:
                     Metric:                    Value:
                   read_time                     8.393
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 24, 2022 6:50:18 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 5 mins 25.656 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 57s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/vmgvt736ow7pc

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2952

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2952/display/redirect>

Changes:


------------------------------------------
[...truncated 347.23 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 24, 2022 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 24, 2022 12:44:58 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 24, 2022 12:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 24, 2022 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 24, 2022 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 24, 2022 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 24, 2022 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 24, 2022 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 24, 2022 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 24, 2022 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@457596841]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 24, 2022 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 24, 2022 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 24, 2022 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 24, 2022 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 24, 2022 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 24, 2022 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 24, 2022 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@715099463]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 24, 2022 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 24, 2022 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 24, 2022 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 24, 2022 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 24, 2022 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 24, 2022 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 24, 2022 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 24, 2022 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 24, 2022 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 24, 2022 12:45:12 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 24, 2022 12:45:12 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 24, 2022 12:45:12 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3031032903454795732.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-pjJmFy4mX2Vir9GKlb6bhnalHSkRNjp16T7CFl5Xb5g.jar
    Jan 24, 2022 12:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 file newly uploaded in 0 seconds
    Jan 24, 2022 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 24, 2022 12:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142352 bytes, hash 255227c516cbe0548b5461af57a04013b43e8824af53304f88465996bc29285f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-JVInxRbL4FSLVGGvV6BAE7Q-iCSvUzBPiEZZlrwpKF8.pb
    Jan 24, 2022 12:45:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 24, 2022 12:45:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 24, 2022 12:45:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 24, 2022 12:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 24, 2022 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-24_04_45_16-6820238411075223580?project=apache-beam-testing
    Jan 24, 2022 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-24_04_45_16-6820238411075223580
    Jan 24, 2022 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-24_04_45_16-6820238411075223580
    Jan 24, 2022 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-24T12:45:20.318Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 24, 2022 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T12:45:30.029Z: Worker configuration: e2-standard-2 in us-central1-c.
    Jan 24, 2022 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T12:45:30.942Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 24, 2022 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T12:45:30.980Z: Expanding GroupByKey operations into optimizable parts.
    Jan 24, 2022 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T12:45:31.007Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 24, 2022 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T12:45:31.082Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 24, 2022 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T12:45:31.111Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 24, 2022 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T12:45:31.148Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 24, 2022 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T12:45:31.482Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 24, 2022 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T12:45:31.591Z: Starting 5 workers in us-central1-c...
    Jan 24, 2022 12:46:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T12:46:00.432Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 24, 2022 12:46:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T12:46:28.297Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 24, 2022 12:46:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T12:46:41.086Z: Workers have started successfully.
    Jan 24, 2022 12:46:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T12:46:41.121Z: Workers have started successfully.
    Jan 24, 2022 12:47:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T12:47:10.586Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 24, 2022 12:47:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T12:47:10.727Z: Cleaning up.
    Jan 24, 2022 12:47:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T12:47:10.790Z: Stopping worker pool...
    Jan 24, 2022 12:49:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T12:49:30.686Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 24, 2022 12:49:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T12:49:30.731Z: Worker pool stopped.
    Jan 24, 2022 12:49:38 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-24_04_45_16-6820238411075223580 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 852f53cf-fc2f-44b0-af44-3b5f2310ef10 and timestamp: 2022-01-24T12:49:38.306000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.097

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 24, 2022 12:49:38 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.035 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 45.135 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 16s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/24cm3d2jsn7wo

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2951

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2951/display/redirect>

Changes:


------------------------------------------
[...truncated 346.02 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 5de997a2aa9b35540d6d2444e22e55d9
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 24, 2022 6:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 24, 2022 6:45:00 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 24, 2022 6:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 24, 2022 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 24, 2022 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 24, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 24, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 24, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 24, 2022 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 24, 2022 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


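For reference, the three blocks above are Beam SQL's planning stages: the SQL text, the Calcite logical plan (SQLPlan>), and the physical plan (BEAMPlan>), in which the projection and filter land in a BeamCalcRel stacked on the IO source. A minimal sketch of issuing the same query through Beam SQL's SqlTransform (the identifiers hackerNews and filtered are hypothetical, not the test's actual code; the input is assumed to be a schema-aware PCollection<Row>):

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // SqlTransform registers a single input PCollection under the name
    // PCOLLECTION; `by` is a reserved word in the dialect, hence the backticks.
    PCollection<Row> filtered =
        hackerNews.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score "
                    + "FROM PCOLLECTION "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
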
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@457596841]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
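
The failure matches the root causes listed in the message: the PCollection<Row> produced by ParDo(RowMonitor) carries no schema, so no Row coder can be inferred. The remedy the error itself names, PCollection.setRowSchema, looks roughly like the sketch below; the field names and types are read off the query above, while rows and RowMonitorFn are hypothetical stand-ins for the test's code:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Attach the row schema so a RowCoder can be inferred for the ParDo output.
    Schema schema =
        Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

    PCollection<Row> monitored =
        rows.apply("RowMonitor", ParDo.of(new RowMonitorFn())) // stand-in DoFn<Row, Row>
            .setRowSchema(schema); // alternatively: .setCoder(RowCoder.of(schema))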

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 24, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 24, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 24, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 24, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 24, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 24, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 24, 2022 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1460265227]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 24, 2022 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 24, 2022 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 24, 2022 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 24, 2022 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 24, 2022 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 24, 2022 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 24, 2022 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 24, 2022 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 24, 2022 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 24, 2022 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 24, 2022 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 24, 2022 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8113314229874026062.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Yc-vjjpkdcQyzcf4G0DJkXtZ2o5SdLVsSyGPLwWQALc.jar
    Jan 24, 2022 6:45:15 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 1 seconds
    Jan 24, 2022 6:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 24, 2022 6:45:15 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142349 bytes, hash be88523c6a8c8c1fedfc1e62a91e2d1cec7ae1a45aff1b5e71d93a1b5406df1c> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-vohSPGqMjB_t_B5iqR4tHOx64aRa_xtecdk6G1QG3xw.pb
    Jan 24, 2022 6:45:19 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 24, 2022 6:45:19 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 24, 2022 6:45:19 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 24, 2022 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 24, 2022 6:45:20 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-23_22_45_20-6051589948273445251?project=apache-beam-testing
    Jan 24, 2022 6:45:20 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-23_22_45_20-6051589948273445251
    Jan 24, 2022 6:45:20 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-23_22_45_20-6051589948273445251
    Jan 24, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-24T06:45:22.302Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 24, 2022 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T06:45:30.227Z: Worker configuration: e2-standard-2 in us-central1-c.
    Jan 24, 2022 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T06:45:30.869Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 24, 2022 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T06:45:30.905Z: Expanding GroupByKey operations into optimizable parts.
    Jan 24, 2022 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T06:45:30.932Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 24, 2022 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T06:45:31.002Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 24, 2022 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T06:45:31.031Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 24, 2022 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T06:45:31.063Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 24, 2022 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T06:45:31.383Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 24, 2022 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T06:45:31.469Z: Starting 5 workers in us-central1-c...
    Jan 24, 2022 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T06:45:35.220Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 24, 2022 6:46:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T06:46:16.020Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 24, 2022 6:46:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T06:46:43.940Z: Workers have started successfully.
    Jan 24, 2022 6:46:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T06:46:43.972Z: Workers have started successfully.
    Jan 24, 2022 6:47:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T06:47:14.662Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 24, 2022 6:47:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T06:47:14.818Z: Cleaning up.
    Jan 24, 2022 6:47:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T06:47:14.911Z: Stopping worker pool...
    Jan 24, 2022 6:49:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T06:49:38.891Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 24, 2022 6:49:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T06:49:38.935Z: Worker pool stopped.
    Jan 24, 2022 6:49:44 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-23_22_45_20-6051589948273445251 finished with status DONE.

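In the push-down run above the BeamCalcRel disappears from the plan entirely: the usedFields list and the logged filter travel into the BigQuery Storage Read API call, and of the three tests only this one gets as far as submitting a Dataflow job. A hand-written direct read that is roughly equivalent to the pushed-down plan (the table spec and variable names are hypothetical, not the test's source):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.values.PCollection;

    // selectedFields trims the columns and rowRestriction is evaluated
    // server-side by the Storage Read API, so filtered-out rows and unused
    // columns never leave BigQuery.
    PCollection<TableRow> stories =
        pipeline.apply(
            BigQueryIO.readTableRows()
                .from("apache-beam-testing:beam.HACKER_NEWS") // hypothetical spec
                .withMethod(Method.DIRECT_READ)
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));
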
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 19da81e1-1c14-4d78-a018-85459a21eb62 and timestamp: 2022-01-24T06:49:44.887000000Z:
                     Metric:                    Value:
                   read_time                     7.306
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 24, 2022 6:49:44 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

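The warning above is the reason no metrics land in InfluxDB for this run: the publisher has no measurement/database configured. In comparable Beam perf-test jobs these arrive as extra pipeline options; the flag names and values below are an assumption modeled on those jobs, not taken from this log:

    -DbeamTestPipelineOptions=[...,
      "--influxMeasurement=sql_bqio_read_java_batch",
      "--influxDatabase=beam_test_metrics",
      "--influxHost=http://localhost:8086"]
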
Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 49.177 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 20s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/glwfp3fcexgqs

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2950

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2950/display/redirect>

Changes:


------------------------------------------
[...truncated 346.33 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 5de997a2aa9b35540d6d2444e22e55d9
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 24, 2022 12:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 24, 2022 12:44:56 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 24, 2022 12:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 24, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 24, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 24, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 24, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 24, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 24, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 24, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@457596841]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 24, 2022 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 24, 2022 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 24, 2022 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 24, 2022 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 24, 2022 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 24, 2022 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 24, 2022 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1361671464]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 24, 2022 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 24, 2022 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 24, 2022 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 24, 2022 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 24, 2022 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 24, 2022 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 24, 2022 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 24, 2022 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 24, 2022 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 24, 2022 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 24, 2022 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 24, 2022 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test347736031686455471.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-fWR0eGe_p_IYFnRWqbu5ij58EmTgvUfWq4aa8xxZpxw.jar
    Jan 24, 2022 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 1 seconds
    Jan 24, 2022 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 24, 2022 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142349 bytes, hash ed2785d742562200b851ea6178d5f1e8a848d48b39fdff5564c4a90313eb35ca> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-7SeF10JWIgC4UepheNXx6KhI1Is5_f9VZMSpAxPrNco.pb
    Jan 24, 2022 12:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 24, 2022 12:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 24, 2022 12:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 24, 2022 12:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 24, 2022 12:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-23_16_45_12-11602314698835650124?project=apache-beam-testing
    Jan 24, 2022 12:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-23_16_45_12-11602314698835650124
    Jan 24, 2022 12:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-23_16_45_12-11602314698835650124
    Jan 24, 2022 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-24T00:45:16.162Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 24, 2022 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T00:45:25.594Z: Worker configuration: e2-standard-2 in us-central1-c.
    Jan 24, 2022 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T00:45:26.380Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 24, 2022 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T00:45:26.441Z: Expanding GroupByKey operations into optimizable parts.
    Jan 24, 2022 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T00:45:26.467Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 24, 2022 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T00:45:26.531Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 24, 2022 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T00:45:26.597Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 24, 2022 12:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T00:45:26.622Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 24, 2022 12:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T00:45:27.122Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 24, 2022 12:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T00:45:27.227Z: Starting 5 workers in us-central1-c...
    Jan 24, 2022 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T00:45:44.493Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 24, 2022 12:46:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T00:46:11.001Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 24, 2022 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T00:46:36.567Z: Workers have started successfully.
    Jan 24, 2022 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T00:46:36.611Z: Workers have started successfully.
    Jan 24, 2022 12:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T00:47:04.944Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 24, 2022 12:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T00:47:05.104Z: Cleaning up.
    Jan 24, 2022 12:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T00:47:05.168Z: Stopping worker pool...
    Jan 24, 2022 12:49:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T00:49:34.223Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 24, 2022 12:49:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-24T00:49:34.268Z: Worker pool stopped.
    Jan 24, 2022 12:49:39 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-23_16_45_12-11602314698835650124 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 82b7b3d8-2958-402b-b064-f5f8be2c8d16 and timestamp: 2022-01-24T00:49:39.716000000Z:
                     Metric:                    Value:
                   read_time                     5.926
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 24, 2022 12:49:39 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 48.5 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 18s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/tiyjmjtup6j64

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2949

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2949/display/redirect>

Changes:


------------------------------------------
[...truncated 348.24 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 23, 2022 6:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 23, 2022 6:44:57 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 23, 2022 6:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 23, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 23, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 23, 2022 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 23, 2022 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 23, 2022 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 23, 2022 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 23, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@457596841]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 23, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 23, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 23, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 23, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 23, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 23, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 23, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@715099463]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 23, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 23, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 23, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 23, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 23, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 23, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 23, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 23, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 23, 2022 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 23, 2022 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 23, 2022 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 23, 2022 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8630416796069860821.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-rJfn_H5ipQjH2fE4MR1pl28a6Qp1Eub2hA-bgu4U6EE.jar
    Jan 23, 2022 6:45:12 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Jan 23, 2022 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 23, 2022 6:45:12 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142350 bytes, hash 8b0c0aca852d883c21c38901ed895ae06b59b74e1bec793cd5c8309a32c5e52a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-iwwKyoUtiDwhw4kB7Yla4GtZt04b7Hk81cgwmjLF5So.pb
    Jan 23, 2022 6:45:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 23, 2022 6:45:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 23, 2022 6:45:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 23, 2022 6:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 23, 2022 6:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-23_10_45_15-5607719481908048675?project=apache-beam-testing
    Jan 23, 2022 6:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-23_10_45_15-5607719481908048675
    Jan 23, 2022 6:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-23_10_45_15-5607719481908048675
    Jan 23, 2022 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-23T18:45:19.209Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 23, 2022 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T18:45:31.974Z: Worker configuration: e2-standard-2 in us-central1-f.
    Jan 23, 2022 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T18:45:33.022Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 23, 2022 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T18:45:33.062Z: Expanding GroupByKey operations into optimizable parts.
    Jan 23, 2022 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T18:45:33.087Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 23, 2022 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T18:45:33.158Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 23, 2022 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T18:45:33.181Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 23, 2022 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T18:45:33.198Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 23, 2022 6:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T18:45:33.539Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 23, 2022 6:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T18:45:33.650Z: Starting 5 workers in us-central1-f...
    Jan 23, 2022 6:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T18:46:04.467Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 23, 2022 6:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T18:46:24.320Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 23, 2022 6:46:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T18:46:48.847Z: Workers have started successfully.
    Jan 23, 2022 6:46:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T18:46:48.877Z: Workers have started successfully.
    Jan 23, 2022 6:47:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T18:47:21.485Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 23, 2022 6:47:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T18:47:21.657Z: Cleaning up.
    Jan 23, 2022 6:47:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T18:47:21.738Z: Stopping worker pool...
    Jan 23, 2022 6:49:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T18:49:53.020Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 23, 2022 6:49:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T18:49:53.067Z: Worker pool stopped.
    Jan 23, 2022 6:49:59 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-23_10_45_15-5607719481908048675 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c108b1ff-d3ab-47d4-b7dc-c314d0acc4c9 and timestamp: 2022-01-23T18:49:59.597000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                      8.87

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 23, 2022 6:49:59 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
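
This warning means the run's metrics were measured but not exported: the
InfluxDB publisher skips publishing unless both a database and a measurement
are configured, and this job's -DbeamTestPipelineOptions supplies neither. A
rough sketch of a complete configuration; the builder method names follow
org.apache.beam.sdk.testutils.publishing.InfluxDBSettings as best understood,
and the host and database values here are placeholders, not this job's real
settings:

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    class InfluxConfigSketch {
      static InfluxDBSettings settings() {
        return InfluxDBSettings.builder()
            .withHost("http://localhost:8086")            // placeholder host
            .withDatabase("beam_test_metrics")            // placeholder database
            .withMeasurement("sql_bqio_read_java_batch")  // mirrors --metricsBigQueryTable
            .get();
      }
    }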

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 5 mins 7.84 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 40s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/mgvoryjkl3wqo

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2948

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2948/display/redirect>

Changes:


------------------------------------------
[...truncated 352.73 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 23, 2022 12:47:50 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 23, 2022 12:47:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 23, 2022 12:47:57 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 23, 2022 12:48:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 23, 2022 12:48:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 23, 2022 12:48:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 23, 2022 12:48:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 23, 2022 12:48:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 23, 2022 12:48:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 23, 2022 12:48:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1106513402]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 23, 2022 12:48:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 23, 2022 12:48:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 23, 2022 12:48:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 23, 2022 12:48:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 23, 2022 12:48:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 23, 2022 12:48:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 23, 2022 12:48:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1446787786]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)
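
Both failures above share one root cause: a ParDo that re-emits Beam Rows
produces a PCollection<Row>, and no coder can be inferred for Row without a
schema, exactly as the error message says. The fix it points at is to
re-attach the schema on the ParDo output. A minimal sketch of the pattern
(the DoFn below is a stand-in, not the IT's actual RowMonitor):

    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFix {
      static PCollection<Row> monitored(PCollection<Row> rows) {
        return rows
            .apply("RowMonitor", ParDo.of(new DoFn<Row, Row>() {
              @ProcessElement
              public void process(@Element Row row, OutputReceiver<Row> out) {
                out.output(row); // a real monitor would also record metrics
              }
            }))
            // Row has no default coder; restoring the schema lets Beam use a
            // SchemaCoder, which resolves the IllegalStateException above.
            .setRowSchema(rows.getSchema());
      }
    }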

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 23, 2022 12:48:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 23, 2022 12:48:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 23, 2022 12:48:23 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 23, 2022 12:48:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 23, 2022 12:48:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 23, 2022 12:48:23 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 23, 2022 12:48:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 23, 2022 12:48:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 23, 2022 12:48:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 23, 2022 12:48:39 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 23, 2022 12:48:39 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 23, 2022 12:48:41 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5273193058881539768.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-YLD7LtWGv6hKkcT0lpFVCXS502uHlyfKnZ9m95gYWsw.jar
    Jan 23, 2022 12:48:47 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 8 seconds
    Jan 23, 2022 12:48:47 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 23, 2022 12:48:48 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142349 bytes, hash 92b8673c5fe37e67f9c58eb85b4ab3c004061da73dbe1640a8e674f36ca216be> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-krhnPF_jfmf5xY64W0qzwAQGHac9vhZAqOZ082yiFr4.pb
    Jan 23, 2022 12:48:55 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 23, 2022 12:48:56 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 23, 2022 12:48:56 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 23, 2022 12:48:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 23, 2022 12:48:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-23_04_48_57-8736407557399851552?project=apache-beam-testing
    Jan 23, 2022 12:48:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-23_04_48_57-8736407557399851552
    Jan 23, 2022 12:48:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-23_04_48_57-8736407557399851552
    Jan 23, 2022 12:49:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-23T12:49:00.948Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 23, 2022 12:49:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T12:49:16.965Z: Worker configuration: e2-standard-2 in us-central1-c.
    Jan 23, 2022 12:49:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T12:49:17.745Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 23, 2022 12:49:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T12:49:17.775Z: Expanding GroupByKey operations into optimizable parts.
    Jan 23, 2022 12:49:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T12:49:17.802Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 23, 2022 12:49:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T12:49:17.869Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 23, 2022 12:49:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T12:49:17.891Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 23, 2022 12:49:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T12:49:17.921Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 23, 2022 12:49:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T12:49:18.267Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 23, 2022 12:49:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T12:49:18.343Z: Starting 5 workers in us-central1-c...
    Jan 23, 2022 12:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T12:49:21.699Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 23, 2022 12:50:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T12:50:02.826Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 23, 2022 12:50:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T12:50:27.389Z: Workers have started successfully.
    Jan 23, 2022 12:50:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T12:50:27.411Z: Workers have started successfully.
    Jan 23, 2022 12:50:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T12:50:54.434Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 23, 2022 12:50:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T12:50:54.581Z: Cleaning up.
    Jan 23, 2022 12:50:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T12:50:54.702Z: Stopping worker pool...
    Jan 23, 2022 12:53:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T12:53:20.694Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 23, 2022 12:53:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T12:53:20.756Z: Worker pool stopped.
    Jan 23, 2022 12:53:29 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-23_04_48_57-8736407557399851552 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): eb278255-7698-4f12-80f7-f3ee7458e3c3 and timestamp: 2022-01-23T12:53:29.934000000Z:
                     Metric:                    Value:
                   read_time                     7.114
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 23, 2022 12:53:30 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.123 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.122 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 5 mins 56.933 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 8m 19s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/t3cjiei4v2gpa

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2947

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2947/display/redirect>

Changes:


------------------------------------------
[...truncated 346.43 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 5de997a2aa9b35540d6d2444e22e55d9
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 23, 2022 6:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 23, 2022 6:44:56 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 23, 2022 6:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 23, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 23, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 23, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 23, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 23, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 23, 2022 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 23, 2022 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@457596841]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 23, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 23, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 23, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 23, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 23, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 23, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 23, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@715099463]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 23, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 23, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 23, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 23, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 23, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 23, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 23, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 23, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 23, 2022 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 23, 2022 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 23, 2022 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 23, 2022 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7257503220360052013.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-3d381FZj0Jz_m3anez-MrTHxEDxZEdgxCQJdCxjpFDM.jar
    Jan 23, 2022 6:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Jan 23, 2022 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 23, 2022 6:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142347 bytes, hash c414fd60b91b66bf78829b633a5b2d0670c9590d56faed594a4dd0858b2035a0> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-xBT9YLkbZr94gptjOlstBnDJWQ1W-u1ZSk3QhYsgNaA.pb
    Jan 23, 2022 6:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 23, 2022 6:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 23, 2022 6:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 23, 2022 6:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 23, 2022 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-22_22_45_13-6071978645878817333?project=apache-beam-testing
    Jan 23, 2022 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-22_22_45_13-6071978645878817333
    Jan 23, 2022 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-22_22_45_13-6071978645878817333
    Jan 23, 2022 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-23T06:45:17.270Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 23, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T06:45:24.758Z: Worker configuration: e2-standard-2 in us-central1-b.
    Jan 23, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T06:45:25.297Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 23, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T06:45:25.336Z: Expanding GroupByKey operations into optimizable parts.
    Jan 23, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T06:45:25.362Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 23, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T06:45:25.440Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 23, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T06:45:25.476Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 23, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T06:45:25.509Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 23, 2022 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T06:45:25.893Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 23, 2022 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T06:45:26.092Z: Starting 5 workers in us-central1-b...
    Jan 23, 2022 6:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T06:45:32.333Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 23, 2022 6:46:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T06:46:16.227Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 23, 2022 6:46:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T06:46:45.288Z: Workers have started successfully.
    Jan 23, 2022 6:46:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T06:46:45.315Z: Workers have started successfully.
    Jan 23, 2022 6:47:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T06:47:18.476Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 23, 2022 6:47:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T06:47:18.643Z: Cleaning up.
    Jan 23, 2022 6:47:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T06:47:18.715Z: Stopping worker pool...
    Jan 23, 2022 6:49:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T06:49:50.754Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 23, 2022 6:49:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T06:49:50.793Z: Worker pool stopped.
    Jan 23, 2022 6:49:57 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-22_22_45_13-6071978645878817333 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 3ca85cd1-a688-410a-9677-4ba5c0181d3d and timestamp: 2022-01-23T06:49:57.988000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.648

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 23, 2022 6:49:58 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.037 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 5 mins 6.65 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 38s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/7dfvuvisu7ubw

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2946

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2946/display/redirect>

Changes:


------------------------------------------
[...truncated 346.02 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 5de997a2aa9b35540d6d2444e22e55d9
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 23, 2022 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
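
    A sketch of the rename this warning asks for, applied to the
    -DbeamTestPipelineOptions array shown above (the empty value is carried over
    from this run; only the option name changes):

        before: "--workerHarnessContainerImage="
        after:  "--sdkContainerImage="
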
    Jan 23, 2022 12:44:52 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 23, 2022 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 23, 2022 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 23, 2022 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 23, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 23, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 23, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 23, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 23, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@457596841]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
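
    The three root causes above all describe the same gap: the Row-typed output
    of ParDo(RowMonitor) has no schema attached, so no Coder can be inferred for
    it. A minimal self-contained sketch of the remedy the message itself names,
    PCollection.setRowSchema (class name, field names, and values here are
    illustrative, not taken from the test source):

        import org.apache.beam.sdk.Pipeline;
        import org.apache.beam.sdk.schemas.Schema;
        import org.apache.beam.sdk.transforms.Create;
        import org.apache.beam.sdk.transforms.DoFn;
        import org.apache.beam.sdk.transforms.ParDo;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        public class RowSchemaFixSketch {
          public static void main(String[] args) {
            Pipeline p = Pipeline.create();

            // Schema for the four projected columns (illustrative field names).
            Schema schema = Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt32Field("score")
                .build();

            PCollection<Row> rows = p.apply(
                Create.of(Row.withSchema(schema)
                        .addValues("someone", "story", "A title", 3)
                        .build())
                    .withRowSchema(schema));

            // A pass-through ParDo analogous to ParDo(RowMonitor); its Row
            // output has no inferable coder until a schema is attached.
            rows.apply(ParDo.of(new DoFn<Row, Row>() {
                  @ProcessElement
                  public void process(@Element Row row, OutputReceiver<Row> out) {
                    out.output(row);
                  }
                }))
                // The call the exception asks for; omitting it reproduces
                // "Cannot provide a coder for a Beam Row".
                .setRowSchema(schema);

            p.run().waitUntilFinish();
          }
        }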

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 23, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 23, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 23, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 23, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 23, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 23, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 23, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@715099463]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 23, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 23, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 23, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 23, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 23, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 23, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 23, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 23, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
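
    For reference, the plan above collapses the Calc and IO nodes into a single
    BeamPushDownIOSourceRel: only the used fields are read and the filter is
    handed to the BigQuery Storage Read API. Below is a rough hand-written
    equivalent using BigQueryIO's public projection/filter knobs (an
    approximation of what the push-down achieves, not the code path Beam SQL
    actually takes; the table name is illustrative):

        import com.google.api.services.bigquery.model.TableRow;
        import java.util.Arrays;
        import org.apache.beam.sdk.Pipeline;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
        import org.apache.beam.sdk.values.PCollection;

        public class PushDownReadSketch {
          public static void main(String[] args) {
            Pipeline p = Pipeline.create();

            // Column projection mirrors usedFields=[by, type, title, score];
            // the row restriction mirrors the pushed-down filter logged above.
            PCollection<TableRow> rows = p.apply(
                BigQueryIO.readTableRows()
                    .from("bigquery-public-data:hacker_news.full")  // illustrative
                    .withMethod(Method.DIRECT_READ)
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    .withRowRestriction(
                        "(type = 'story' OR type = 'job') AND score > 2"));

            p.run().waitUntilFinish();
          }
        }
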
    Jan 23, 2022 12:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 23, 2022 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 23, 2022 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 23, 2022 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4335044897762522824.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-NtnSKxoA4qeLJuJeX9uQrZ8Z0kEhcVqa9fLdfs8Kyl0.jar
    Jan 23, 2022 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Jan 23, 2022 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 23, 2022 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142349 bytes, hash 9c7ce0bb59ae99af11cd38fe2b6a2b14a5bbcf6c206837fa1931cabd9ea4256f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-nHzgu1muma8RzTj-K2orFKW7z2wgaDf6GTHKvZ6kJW8.pb
    Jan 23, 2022 12:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 23, 2022 12:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 23, 2022 12:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 23, 2022 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 23, 2022 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-22_16_45_07-11224779981516876836?project=apache-beam-testing
    Jan 23, 2022 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-22_16_45_07-11224779981516876836
    Jan 23, 2022 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-22_16_45_07-11224779981516876836
    Jan 23, 2022 12:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-23T00:45:11.376Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 23, 2022 12:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T00:45:19.976Z: Worker configuration: e2-standard-2 in us-central1-f.
    Jan 23, 2022 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T00:45:20.689Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 23, 2022 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T00:45:20.732Z: Expanding GroupByKey operations into optimizable parts.
    Jan 23, 2022 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T00:45:20.758Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 23, 2022 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T00:45:20.819Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 23, 2022 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T00:45:20.844Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 23, 2022 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T00:45:20.887Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 23, 2022 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T00:45:21.270Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 23, 2022 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T00:45:21.335Z: Starting 5 workers in us-central1-f...
    Jan 23, 2022 12:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T00:45:31.249Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 23, 2022 12:46:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T00:46:11.048Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 23, 2022 12:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T00:46:35.027Z: Workers have started successfully.
    Jan 23, 2022 12:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T00:46:35.049Z: Workers have started successfully.
    Jan 23, 2022 12:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T00:47:07.068Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 23, 2022 12:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T00:47:07.214Z: Cleaning up.
    Jan 23, 2022 12:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T00:47:07.347Z: Stopping worker pool...
    Jan 23, 2022 12:49:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T00:49:38.294Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 23, 2022 12:49:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-23T00:49:38.334Z: Worker pool stopped.
    Jan 23, 2022 12:49:44 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-22_16_45_07-11224779981516876836 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6d130c3e-4931-45b1-8744-9e3f6484e512 and timestamp: 2022-01-23T00:49:44.755000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                      7.25

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 23, 2022 12:49:44 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 57.487 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 25s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/y4bfkjre4qmju

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2945

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2945/display/redirect>

Changes:


------------------------------------------
[...truncated 348.56 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 22, 2022 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 22, 2022 6:45:09 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 22, 2022 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 22, 2022 6:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 22, 2022 6:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 22, 2022 6:45:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 22, 2022 6:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 22, 2022 6:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 22, 2022 6:45:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 22, 2022 6:45:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@457596841]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 22, 2022 6:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 22, 2022 6:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 22, 2022 6:45:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 22, 2022 6:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 22, 2022 6:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 22, 2022 6:45:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 22, 2022 6:45:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1460265227]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 22, 2022 6:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 22, 2022 6:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 22, 2022 6:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 22, 2022 6:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 22, 2022 6:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 22, 2022 6:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 22, 2022 6:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 22, 2022 6:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 22, 2022 6:45:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 22, 2022 6:45:24 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 22, 2022 6:45:24 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 22, 2022 6:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4826808299567571488.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-42DoxFiJOW4pa_V_2WATD_jnOSPHE5ZdHoJEA5koZYg.jar
    Jan 22, 2022 6:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 1 seconds
    Jan 22, 2022 6:45:26 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 22, 2022 6:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142349 bytes, hash 6aaeb747021fac38f492ead6f7b50e896961f455e05f99fc40c784a718eb43ab> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-aq63RwIfrDj0kurW97UOiWlh9FXgX5n8QMeEpxjrQ6s.pb
    Jan 22, 2022 6:45:29 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 22, 2022 6:45:29 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 22, 2022 6:45:29 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 22, 2022 6:45:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 22, 2022 6:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-22_10_45_29-9207988050840959245?project=apache-beam-testing
    Jan 22, 2022 6:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-22_10_45_29-9207988050840959245
    Jan 22, 2022 6:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-22_10_45_29-9207988050840959245
    Jan 22, 2022 6:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-22T18:45:33.255Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 22, 2022 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T18:45:48.019Z: Worker configuration: e2-standard-2 in us-central1-f.
    Jan 22, 2022 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T18:45:48.578Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 22, 2022 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T18:45:48.633Z: Expanding GroupByKey operations into optimizable parts.
    Jan 22, 2022 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T18:45:48.657Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 22, 2022 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T18:45:48.744Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 22, 2022 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T18:45:48.778Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 22, 2022 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T18:45:48.813Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 22, 2022 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T18:45:49.299Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 22, 2022 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T18:45:49.372Z: Starting 5 workers in us-central1-f...
    Jan 22, 2022 6:46:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T18:46:14.037Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 22, 2022 6:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T18:46:37.736Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 22, 2022 6:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T18:47:01.799Z: Workers have started successfully.
    Jan 22, 2022 6:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T18:47:01.836Z: Workers have started successfully.
    Jan 22, 2022 6:47:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T18:47:31.635Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 22, 2022 6:47:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T18:47:31.779Z: Cleaning up.
    Jan 22, 2022 6:47:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T18:47:31.848Z: Stopping worker pool...
    Jan 22, 2022 6:50:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T18:50:04.146Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 22, 2022 6:50:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T18:50:04.197Z: Worker pool stopped.
    Jan 22, 2022 6:50:11 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-22_10_45_29-9207988050840959245 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): b47d7a24-eeec-4087-9b65-83b7030a93a2 and timestamp: 2022-01-22T18:50:11.676000000Z:
                     Metric:                    Value:
                   read_time                     8.118
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 22, 2022 6:50:11 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 5 mins 9.148 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 48s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/hkneel24n2566

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2944

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2944/display/redirect>

Changes:


------------------------------------------
[...truncated 349.08 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 5de997a2aa9b35540d6d2444e22e55d9
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 22, 2022 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 22, 2022 12:45:10 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 22, 2022 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 22, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 22, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 22, 2022 12:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 22, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 22, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 22, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 22, 2022 12:45:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@457596841]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 22, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 22, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 22, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 22, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 22, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 22, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 22, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1361671464]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 22, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 22, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 22, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 22, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 22, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 22, 2022 12:45:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 22, 2022 12:45:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 22, 2022 12:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
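
For contrast with the two failures above: here the planner produced a single BeamPushDownIOSourceRel, so the projection (usedFields=[by, type, title, score]) and the supported filter run inside the BigQuery Storage Read API rather than in a BeamCalcRel. A rough, hand-written equivalent of that pushed-down read against the public BigQueryIO API (the table reference is a placeholder; the test resolves its table through its own table provider):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    // Project only the used fields and evaluate the supported predicate
    // at the storage layer, mirroring the logged plan and filter.
    TypedRead<TableRow> read =
        BigQueryIO.readTableRows()
            .from("project:dataset.hacker_news")  // placeholder table reference
            .withMethod(TypedRead.Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");

Reading fewer columns and rows at the source is the work that the read_time metric further down measures.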
    Jan 22, 2022 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 22, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 22, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 22, 2022 12:45:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4924570916359436187.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-CaJ4CPyTCCexOfLQrzdf_JWAUVqYTz-FnE4yOHpAUI0.jar
    Jan 22, 2022 12:45:23 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 1 seconds
    Jan 22, 2022 12:45:23 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 22, 2022 12:45:23 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142344 bytes, hash 27e854f71ffaea5b23ecca3aae0cd32115ff000a8b4e8303b54271b4295e3fac> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-J-hU9x_66lsj7Mo6rgzTIRX_AAqLToMDtUJxtCleP6w.pb
    Jan 22, 2022 12:45:27 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 22, 2022 12:45:27 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 22, 2022 12:45:27 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 22, 2022 12:45:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 22, 2022 12:45:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-22_04_45_27-956748622524608102?project=apache-beam-testing
    Jan 22, 2022 12:45:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-22_04_45_27-956748622524608102
    Jan 22, 2022 12:45:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-22_04_45_27-956748622524608102
    Jan 22, 2022 12:45:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-22T12:45:30.738Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 22, 2022 12:45:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T12:45:45.546Z: Worker configuration: e2-standard-2 in us-central1-b.
    Jan 22, 2022 12:45:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T12:45:48.184Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 22, 2022 12:45:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T12:45:48.223Z: Expanding GroupByKey operations into optimizable parts.
    Jan 22, 2022 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T12:45:48.264Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 22, 2022 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T12:45:48.349Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 22, 2022 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T12:45:48.423Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 22, 2022 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T12:45:48.459Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 22, 2022 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T12:45:48.848Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 22, 2022 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T12:45:48.935Z: Starting 5 workers in us-central1-b...
    Jan 22, 2022 12:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T12:46:11.721Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 22, 2022 12:46:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T12:46:41.357Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 22, 2022 12:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T12:47:06.145Z: Workers have started successfully.
    Jan 22, 2022 12:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T12:47:06.184Z: Workers have started successfully.
    Jan 22, 2022 12:47:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T12:47:38.786Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 22, 2022 12:47:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T12:47:38.944Z: Cleaning up.
    Jan 22, 2022 12:47:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T12:47:39.028Z: Stopping worker pool...
    Jan 22, 2022 12:50:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T12:50:10.289Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 22, 2022 12:50:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T12:50:10.377Z: Worker pool stopped.
    Jan 22, 2022 12:50:16 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-22_04_45_27-956748622524608102 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 886259c1-9a4a-4d0b-871a-b7e17ee387fd and timestamp: 2022-01-22T12:50:16.890000000Z:
                     Metric:                    Value:
                   read_time                    10.034
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 22, 2022 12:50:17 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
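
The warning above means the metrics printed a few lines up were computed but never exported: InfluxDBPublisher checks for a measurement and a database before publishing and skips otherwise. In Beam's perf-test harness these normally arrive as pipeline options alongside the BigQuery metrics settings already on the command line; the option names below are an assumption based on other Beam load-test jobs, not something this log confirms:

    --influxDatabase=beam_test_metrics            # assumed option name
    --influxMeasurement=sql_bqio_read_java_batch  # assumed option name; value mirrors the BQ table
    --influxHost=http://localhost:8086            # assumed option name; placeholder host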

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 5 mins 11.14 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings
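
To see which script or plugin triggers the deprecation notice, the same task can be rerun with warnings expanded; this sketch assumes a source checkout with the standard gradlew wrapper:

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --warning-mode all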

BUILD FAILED in 5m 58s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/bbu7wvggciviw

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2943

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2943/display/redirect>

Changes:


------------------------------------------
[...truncated 345.98 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 5de997a2aa9b35540d6d2444e22e55d9
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 22, 2022 6:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
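
This warning recurs in every build in this thread because the Jenkins job still passes the legacy flag (visible, with an empty value, in the executor command line above). The fix is a rename in the job's pipeline options; the empty value below is kept verbatim from this job, not a recommendation:

    --workerHarnessContainerImage=   (deprecated legacy option, as currently passed)
    --sdkContainerImage=             (preferred replacement named by the warning)
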
    Jan 22, 2022 6:44:58 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 22, 2022 6:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 22, 2022 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 22, 2022 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 22, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 22, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 22, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 22, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 22, 2022 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@894735295]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 22, 2022 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 22, 2022 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 22, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 22, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 22, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 22, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 22, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1782113233]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 22, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 22, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 22, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 22, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 22, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 22, 2022 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 22, 2022 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 22, 2022 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 22, 2022 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 22, 2022 6:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 22, 2022 6:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 22, 2022 6:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4523724690584371777.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Q5QqzGAmGiaPza03awEVMNZgxuthYYLZyzixqnv2lcA.jar
    Jan 22, 2022 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Jan 22, 2022 6:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 22, 2022 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142349 bytes, hash 038799cd42864d036d3ab0dcb77e1b3ea511ad21d3d31f6512f5ec2dd3a332a2> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-A4eZzUKGTQNtOrDct34bPqURrSHT0x9lEvXsLdOjMqI.pb
    Jan 22, 2022 6:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 22, 2022 6:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 22, 2022 6:45:17 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 22, 2022 6:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 22, 2022 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-21_22_45_17-2471124139250026551?project=apache-beam-testing
    Jan 22, 2022 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-21_22_45_17-2471124139250026551
    Jan 22, 2022 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-21_22_45_17-2471124139250026551
    Jan 22, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-22T06:45:20.592Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 22, 2022 6:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T06:45:29.155Z: Worker configuration: e2-standard-2 in us-central1-f.
    Jan 22, 2022 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T06:45:29.900Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 22, 2022 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T06:45:29.982Z: Expanding GroupByKey operations into optimizable parts.
    Jan 22, 2022 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T06:45:30.018Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 22, 2022 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T06:45:30.091Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 22, 2022 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T06:45:30.129Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 22, 2022 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T06:45:30.165Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 22, 2022 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T06:45:30.499Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 22, 2022 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T06:45:30.561Z: Starting 5 workers in us-central1-f...
    Jan 22, 2022 6:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T06:45:51.923Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 22, 2022 6:46:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T06:46:19.764Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 22, 2022 6:46:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T06:46:42.882Z: Workers have started successfully.
    Jan 22, 2022 6:46:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T06:46:42.908Z: Workers have started successfully.
    Jan 22, 2022 6:47:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T06:47:14.378Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 22, 2022 6:47:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T06:47:14.543Z: Cleaning up.
    Jan 22, 2022 6:47:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T06:47:14.623Z: Stopping worker pool...
    Jan 22, 2022 6:49:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T06:49:44.378Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 22, 2022 6:49:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T06:49:44.440Z: Worker pool stopped.
    Jan 22, 2022 6:49:50 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-21_22_45_17-2471124139250026551 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6b0f6c35-bc5f-4e0d-b5d0-3aa6f66a7925 and timestamp: 2022-01-22T06:49:50.872000000Z:
                     Metric:                    Value:
                   read_time                     8.555
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 22, 2022 6:49:50 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.035 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 57.496 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 29s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/k6l4mshn4ai6k

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2942

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2942/display/redirect?page=changes>

Changes:

[ningkang0957] [BEAM-13687] Improved Spanner IO request count metrics

[noreply] [BEAM-10206] Add key for fields in wrapper (#16583)

[noreply] Merge pull request #16530 from Adding JSON support in SpannerIO and

[noreply] [BEAM-13685] Enable users to specify cache directory under Interactive


------------------------------------------
[...truncated 350.27 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 5de997a2aa9b35540d6d2444e22e55d9
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 22, 2022 12:46:19 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 22, 2022 12:46:20 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 22, 2022 12:46:21 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 22, 2022 12:46:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 22, 2022 12:46:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 22, 2022 12:46:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 22, 2022 12:46:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 22, 2022 12:46:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 22, 2022 12:46:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 22, 2022 12:46:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@457596841]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 22, 2022 12:46:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 22, 2022 12:46:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 22, 2022 12:46:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 22, 2022 12:46:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 22, 2022 12:46:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 22, 2022 12:46:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 22, 2022 12:46:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@715099463]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 22, 2022 12:46:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 22, 2022 12:46:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 22, 2022 12:46:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 22, 2022 12:46:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 22, 2022 12:46:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 22, 2022 12:46:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 22, 2022 12:46:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 22, 2022 12:46:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 22, 2022 12:46:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 22, 2022 12:46:32 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 22, 2022 12:46:32 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 22, 2022 12:46:33 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8344948142792337819.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-or9ztjHMe4lGGzC9F7wQ6WM0Yn1yInx6ugi-7KYIiTY.jar
    Jan 22, 2022 12:46:33 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Jan 22, 2022 12:46:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 22, 2022 12:46:33 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142349 bytes, hash c577ba1612cdbe5d74a3ca2fc0315fa0f1faf1c09d81741a99baafa9507708ed> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-xXe6FhLNvl10o8ovwDFfoPH68cCdgXQambqvqVB3CO0.pb
    Jan 22, 2022 12:46:36 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 22, 2022 12:46:37 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 22, 2022 12:46:37 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 22, 2022 12:46:37 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 22, 2022 12:46:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-21_16_46_37-15347821663582203420?project=apache-beam-testing
    Jan 22, 2022 12:46:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-21_16_46_37-15347821663582203420
    Jan 22, 2022 12:46:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-21_16_46_37-15347821663582203420
    Jan 22, 2022 12:46:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-22T00:46:40.845Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 22, 2022 12:46:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T00:46:48.602Z: Worker configuration: e2-standard-2 in us-central1-f.
    Jan 22, 2022 12:46:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T00:46:49.248Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 22, 2022 12:46:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T00:46:49.294Z: Expanding GroupByKey operations into optimizable parts.
    Jan 22, 2022 12:46:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T00:46:49.331Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 22, 2022 12:46:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T00:46:49.399Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 22, 2022 12:46:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T00:46:49.426Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 22, 2022 12:46:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T00:46:49.471Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 22, 2022 12:46:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T00:46:49.851Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 22, 2022 12:46:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T00:46:49.917Z: Starting 5 workers in us-central1-f...
    Jan 22, 2022 12:47:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T00:47:20.599Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 22, 2022 12:47:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T00:47:41.529Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 22, 2022 12:48:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T00:48:05.317Z: Workers have started successfully.
    Jan 22, 2022 12:48:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T00:48:05.355Z: Workers have started successfully.
    Jan 22, 2022 12:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T00:48:39.871Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 22, 2022 12:48:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T00:48:40.011Z: Cleaning up.
    Jan 22, 2022 12:48:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T00:48:40.111Z: Stopping worker pool...
    Jan 22, 2022 12:51:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T00:51:11.739Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 22, 2022 12:51:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-22T00:51:11.788Z: Worker pool stopped.
    Jan 22, 2022 12:51:18 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-21_16_46_37-15347821663582203420 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 660f555e-603a-4042-941f-20caa0cc2958 and timestamp: 2022-01-22T00:51:18.685000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.897

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 22, 2022 12:51:18 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
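
The metrics above (fields_read, read_time) were computed but only printed: the InfluxDB publisher skips publishing when it is given no target database/measurement. A guess at the missing configuration, assuming the influx* pipeline-option names used elsewhere in Beam's test infrastructure -- the exact names and values below are placeholders, not taken from this job:

    -DbeamTestPipelineOptions=[...,
      "--influxDatabase=beam_test_metrics",
      "--influxMeasurement=sql_bqio_read_java_batch",
      "--influxHost=http://localhost:8086"]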

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 5 mins 3.254 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 56s
165 actionable tasks: 104 executed, 59 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/okkjffuspojxu

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2941

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2941/display/redirect>

Changes:


------------------------------------------
[...truncated 346.82 KB...]
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is e4c4c20bcd2f77205828e4b7e72911f2
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 21, 2022 6:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 21, 2022 6:44:53 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 21, 2022 6:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 21, 2022 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 21, 2022 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 21, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 21, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 21, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 21, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 21, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@457596841]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
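
The exception message above enumerates its own fix: a PCollection<Row> carries no default Coder, so the step producing it must either attach a schema (PCollection.setRowSchema) or set an explicit coder (RowCoder). A minimal, self-contained sketch of that pattern on the direct runner, using a hypothetical schema mirroring the four columns the query projects (the real schema comes from the BigQuery table):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFixSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create();

        // Hypothetical schema; illustrative only.
        Schema schema = Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();

        PCollection<Row> rows = pipeline.apply(
            Create.of(Row.withSchema(schema)
                    .addValues("alice", "story", "hello", 3L).build())
                .withRowSchema(schema));

        // The fix named in the error: attach the row schema so a Coder can be
        // derived; equivalently, rows.setCoder(RowCoder.of(schema)).
        rows.setRowSchema(schema);

        pipeline.run().waitUntilFinish();
      }
    }

This does not reproduce the IT's failure path (which goes through the internal BeamSqlRelUtils.toPCollection); it only illustrates where setRowSchema/setCoder would be applied.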

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 21, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 21, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 21, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 21, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 21, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 21, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 21, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1361671464]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 21, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 21, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 21, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 21, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 21, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 21, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 21, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 21, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
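
For the push-down variant, the BEAMPlan above collapses the projection and filter into a single BeamPushDownIOSourceRel: only the four referenced fields are requested, and the predicate is handed to the BigQuery Storage Read API as the row-restriction filter logged here, which is why this is the one test of the three that survives pipeline construction and runs. The ITs drive this through the internal BeamSqlRelUtils; the public entry point for the same SQL is SqlTransform. A minimal sketch against an in-memory stand-in table (names and rows are illustrative; no actual push-down occurs for a local PCollection, since that requires the BigQuery table provider configured with method DIRECT_READ):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TupleTag;

    public class PushDownQuerySketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create();

        // Toy stand-in for beam.HACKER_NEWS with only the queried columns.
        Schema schema = Schema.builder()
            .addNullableField("by", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();

        PCollection<Row> hackerNews = pipeline.apply(
            Create.of(
                    Row.withSchema(schema).addValues("alice", "story", "hi", 3L).build(),
                    Row.withSchema(schema).addValues("bob", "comment", "re", 1L).build())
                .withRowSchema(schema));

        // Same query the IT plans; `by` needs backticks because BY is reserved.
        PCollection<Row> result =
            PCollectionTuple.of(new TupleTag<>("HACKER_NEWS"), hackerNews)
                .apply(SqlTransform.query(
                    "SELECT `by` AS author, type, title, score "
                        + "FROM HACKER_NEWS "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

        pipeline.run().waitUntilFinish();
      }
    }

Against a real BigQuery table, the DIRECT_READ vs. DEFAULT behaviour seen in these logs is a property of the table definition in the BigQuery table provider, not of the query text.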
    Jan 21, 2022 6:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 21, 2022 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 21, 2022 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 21, 2022 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7399605339452431814.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-j8yyaZV_s_kTfjAld_BpXpmSubtsduiL4yWxHFMZhAk.jar
    Jan 21, 2022 6:45:15 PM org.apache.beam.sdk.metrics.MetricsEnvironment getCurrentContainer
    WARNING: Reporting metrics are not supported in the current execution environment.
    Jan 21, 2022 6:45:15 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 11 seconds
    Jan 21, 2022 6:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 21, 2022 6:45:15 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142349 bytes, hash 90a047352e4cd17b50b825b229a3979ea9ba3fb78f52d68062c9948023207b07> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-kKBHNS5M0XtQuCWyKaOXnqm6P7ePUtaAYsmUgCMgewc.pb
    Jan 21, 2022 6:45:19 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 21, 2022 6:45:19 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 21, 2022 6:45:19 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 21, 2022 6:45:19 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 21, 2022 6:45:20 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-21_10_45_19-8126250257106599321?project=apache-beam-testing
    Jan 21, 2022 6:45:20 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-21_10_45_19-8126250257106599321
    Jan 21, 2022 6:45:20 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-21_10_45_19-8126250257106599321
    Jan 21, 2022 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-21T18:45:23.322Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 21, 2022 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T18:45:32.438Z: Worker configuration: e2-standard-2 in us-central1-f.
    Jan 21, 2022 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T18:45:33.067Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 21, 2022 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T18:45:33.170Z: Expanding GroupByKey operations into optimizable parts.
    Jan 21, 2022 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T18:45:33.256Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 21, 2022 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T18:45:33.339Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 21, 2022 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T18:45:33.383Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 21, 2022 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T18:45:33.428Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 21, 2022 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T18:45:33.828Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 21, 2022 6:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T18:45:33.922Z: Starting 5 workers in us-central1-f...
    Jan 21, 2022 6:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T18:45:40.247Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 21, 2022 6:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T18:46:20.734Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 21, 2022 6:46:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T18:46:45.174Z: Workers have started successfully.
    Jan 21, 2022 6:46:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T18:46:45.203Z: Workers have started successfully.
    Jan 21, 2022 6:47:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T18:47:17.925Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 21, 2022 6:47:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T18:47:18.098Z: Cleaning up.
    Jan 21, 2022 6:47:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T18:47:18.199Z: Stopping worker pool...
    Jan 21, 2022 6:49:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T18:49:43.612Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 21, 2022 6:49:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T18:49:43.650Z: Worker pool stopped.
    Jan 21, 2022 6:49:49 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-21_10_45_19-8126250257106599321 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8cd35306-5a5b-451d-a47b-e6f848d6d46c and timestamp: 2022-01-21T18:49:49.292000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.088

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 21, 2022 6:49:49 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 5 mins 0.395 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 28s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/66xsjwxgsaftu

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2940

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2940/display/redirect>

Changes:


------------------------------------------
[...truncated 349.62 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is e4c4c20bcd2f77205828e4b7e72911f2
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 21, 2022 12:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 21, 2022 12:45:23 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 21, 2022 12:45:24 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 21, 2022 12:45:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 21, 2022 12:45:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 21, 2022 12:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 21, 2022 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 21, 2022 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 21, 2022 12:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 21, 2022 12:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@457596841]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 21, 2022 12:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 21, 2022 12:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 21, 2022 12:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 21, 2022 12:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 21, 2022 12:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 21, 2022 12:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 21, 2022 12:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1361671464]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 21, 2022 12:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 21, 2022 12:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 21, 2022 12:45:30 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 21, 2022 12:45:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 21, 2022 12:45:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 21, 2022 12:45:30 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 21, 2022 12:45:30 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 21, 2022 12:45:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 21, 2022 12:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 21, 2022 12:45:34 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 21, 2022 12:45:34 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 21, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test369857038791404443.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-8OVjReBZRdxte6DgwBQUCxvcdRuyEXxot4ccAzKKja8.jar
    Jan 21, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Jan 21, 2022 12:45:35 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 21, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142349 bytes, hash d228f420161b0e0dd307b156393d4999e934494250e5a1ed44492c10ce6e3136> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-0ij0IBYbDg3TB7FWOT1Jmek0SUJQ5aHtREksEM5uMTY.pb
    Jan 21, 2022 12:45:38 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 21, 2022 12:45:39 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 21, 2022 12:45:39 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 21, 2022 12:45:39 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 21, 2022 12:45:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-21_04_45_39-10957040200667635562?project=apache-beam-testing
    Jan 21, 2022 12:45:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-21_04_45_39-10957040200667635562
    Jan 21, 2022 12:45:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-21_04_45_39-10957040200667635562
    Jan 21, 2022 12:45:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-21T12:45:42.690Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 21, 2022 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T12:45:51.583Z: Worker configuration: e2-standard-2 in us-central1-f.
    Jan 21, 2022 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T12:45:52.298Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 21, 2022 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T12:45:52.335Z: Expanding GroupByKey operations into optimizable parts.
    Jan 21, 2022 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T12:45:52.363Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 21, 2022 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T12:45:52.456Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 21, 2022 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T12:45:52.489Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 21, 2022 12:45:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T12:45:52.527Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 21, 2022 12:45:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T12:45:52.952Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 21, 2022 12:45:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T12:45:53.034Z: Starting 5 workers in us-central1-f...
    Jan 21, 2022 12:46:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T12:46:08.975Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 21, 2022 12:46:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T12:46:44.101Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 21, 2022 12:47:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T12:47:09.598Z: Workers have started successfully.
    Jan 21, 2022 12:47:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T12:47:09.632Z: Workers have started successfully.
    Jan 21, 2022 12:47:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T12:47:41.363Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 21, 2022 12:47:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T12:47:41.536Z: Cleaning up.
    Jan 21, 2022 12:47:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T12:47:41.614Z: Stopping worker pool...
    Jan 21, 2022 12:50:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T12:50:11.427Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 21, 2022 12:50:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T12:50:11.517Z: Worker pool stopped.
    Jan 21, 2022 12:50:19 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-21_04_45_39-10957040200667635562 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6d46365d-3229-4ffc-ae28-c5065e3167cc and timestamp: 2022-01-21T12:50:19.078000000Z:
                     Metric:                    Value:
                   read_time                     8.624
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 21, 2022 12:50:19 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 5 mins 0.176 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 56s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/ryfatssvivvz6

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2939

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2939/display/redirect?page=changes>

Changes:

[thiagotnunes] [BEAM-12164]: Add SDF for reading change stream records

[sergey.kalinin] Update GH Actions to use proper variables names and proper triggers

[dhuntsperger] edited README and comments in Python multi-lang pipes examples

[noreply] Merge pull request #16540 from [BEAM-13678][Playground]Update Github

[noreply] Merge pull request #16546 from [BEAM-13661] [BEAM-13704] [Playground]

[noreply] Merge pull request #16369 from [BEAM-13558] [Playground] Hide the Graph


------------------------------------------
[...truncated 349.89 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is e4c4c20bcd2f77205828e4b7e72911f2
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 21, 2022 6:46:14 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 21, 2022 6:46:15 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 21, 2022 6:46:16 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 21, 2022 6:46:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 21, 2022 6:46:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 21, 2022 6:46:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 21, 2022 6:46:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 21, 2022 6:46:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 21, 2022 6:46:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 21, 2022 6:46:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@457596841]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
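
(This failure and the readUsingDefaultMethod failure below share the root cause the message spells out: a PCollection<Row> needs a schema, via PCollection.setRowSchema, or an explicit coder, via setCoder, before the next transform is applied. A minimal sketch of that fix, with an assumed schema and DoFn, not the actual RowMonitor code:)

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Assumed stand-in for the table's row shape; not the real HACKER_NEWS schema.
        final Schema schema =
            Schema.builder().addStringField("author").addInt64Field("score").build();

        PCollection<Row> rows =
            p.apply(Create.of("alice:3", "bob:5"))
                .apply(
                    ParDo.of(
                        new DoFn<String, Row>() {
                          @ProcessElement
                          public void process(@Element String in, OutputReceiver<Row> out) {
                            String[] parts = in.split(":");
                            out.output(
                                Row.withSchema(schema)
                                    .addValues(parts[0], Long.parseLong(parts[1]))
                                    .build());
                          }
                        }))
                // Without this call, the next apply fails exactly as above:
                // "Cannot provide a coder for a Beam Row."
                .setRowSchema(schema);
                // Equivalent alternative: .setCoder(RowCoder.of(schema))

        p.run().waitUntilFinish();
      }
    }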

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 21, 2022 6:46:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 21, 2022 6:46:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 21, 2022 6:46:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 21, 2022 6:46:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 21, 2022 6:46:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 21, 2022 6:46:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 21, 2022 6:46:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@715099463]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 21, 2022 6:46:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 21, 2022 6:46:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 21, 2022 6:46:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 21, 2022 6:46:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 21, 2022 6:46:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 21, 2022 6:46:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 21, 2022 6:46:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 21, 2022 6:46:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 21, 2022 6:46:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 21, 2022 6:46:27 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 21, 2022 6:46:27 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 21, 2022 6:46:27 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6967978403184745626.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-SyDMDETTOrLesCdAnJlWUyLkwdiMOC5jpAaiEAqYqBQ.jar
    Jan 21, 2022 6:46:28 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 361 files cached, 1 files newly uploaded in 0 seconds
    Jan 21, 2022 6:46:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 21, 2022 6:46:28 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142349 bytes, hash 10307b366bc350321ba3220d9616e19e8b8894252cbef159d2fc2ce830d880ee> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-EDB7NmvDUDIboyINlhbhnouIlCUsvvFZ0vws6DDYgO4.pb
    Jan 21, 2022 6:46:31 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 21, 2022 6:46:31 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 21, 2022 6:46:31 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 21, 2022 6:46:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 21, 2022 6:46:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-20_22_46_31-12361165786932081960?project=apache-beam-testing
    Jan 21, 2022 6:46:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-20_22_46_31-12361165786932081960
    Jan 21, 2022 6:46:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-20_22_46_31-12361165786932081960
    Jan 21, 2022 6:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-21T06:46:36.141Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 21, 2022 6:46:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T06:46:47.900Z: Worker configuration: e2-standard-2 in us-central1-f.
    Jan 21, 2022 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T06:46:48.727Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 21, 2022 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T06:46:48.765Z: Expanding GroupByKey operations into optimizable parts.
    Jan 21, 2022 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T06:46:48.803Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 21, 2022 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T06:46:48.869Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 21, 2022 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T06:46:48.898Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 21, 2022 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T06:46:48.931Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 21, 2022 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T06:46:49.267Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 21, 2022 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T06:46:49.346Z: Starting 5 workers in us-central1-f...
    Jan 21, 2022 6:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T06:47:05.157Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 21, 2022 6:47:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T06:47:41.056Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 21, 2022 6:48:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T06:48:06.833Z: Workers have started successfully.
    Jan 21, 2022 6:48:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T06:48:06.924Z: Workers have started successfully.
    Jan 21, 2022 6:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T06:48:38.521Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 21, 2022 6:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T06:48:38.723Z: Cleaning up.
    Jan 21, 2022 6:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T06:48:38.801Z: Stopping worker pool...
    Jan 21, 2022 6:51:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T06:51:08.891Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 21, 2022 6:51:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T06:51:08.953Z: Worker pool stopped.
    Jan 21, 2022 6:51:16 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-20_22_46_31-12361165786932081960 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): fa3826bf-fd17-4c91-80a7-a66cc07a316e and timestamp: 2022-01-21T06:51:16.111000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.183
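
(By contrast, the push-down variant succeeds and reads only the four projected fields, because the projection and filter travel into the BigQuery Storage read itself, per the usedFields and BigQueryFilter entries in the plan above. Roughly the read that corresponds to, sketched by hand; the table reference is assumed, not copied from the test:)

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply(
            BigQueryIO.readTableRows()
                .from("some-project:beam.HACKER_NEWS") // assumed table reference
                .withMethod(Method.DIRECT_READ)
                // Only these columns cross the wire (cf. usedFields in the plan):
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // And the filter is evaluated server-side (cf. the pushed-down filter):
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }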

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 21, 2022 6:51:16 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
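
(This warning means the run's metrics never reach InfluxDB because the measurement/database properties were not supplied to the test JVM. A hypothetical sketch of how such properties are typically wired in as pipeline options; the interface and option names below are illustrative, not Beam's actual test-utils API:)

    import org.apache.beam.sdk.options.Description;
    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class InfluxOptionsSketch {
      // Hypothetical options interface mirroring the missing properties.
      public interface InfluxPublishOptions extends PipelineOptions {
        @Description("InfluxDB database to publish metrics to")
        String getInfluxDatabase();
        void setInfluxDatabase(String value);

        @Description("InfluxDB measurement to write metric rows into")
        String getInfluxMeasurement();
        void setInfluxMeasurement(String value);
      }

      public static void main(String[] args) {
        // e.g. --influxDatabase=beam_performance --influxMeasurement=sql_bqio_read
        InfluxPublishOptions opts =
            PipelineOptionsFactory.fromArgs(args).as(InfluxPublishOptions.class);
        System.out.println(
            "database=" + opts.getInfluxDatabase()
                + " measurement=" + opts.getInfluxMeasurement());
      }
    }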

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 5 mins 5.555 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 56s
165 actionable tasks: 104 executed, 59 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/p2ldlljlcnnli

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2938

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2938/display/redirect?page=changes>

Changes:

[tuyarer] [BEAM-13577] Beam Select's uniquifyNames function loses nullability of

[Steve Niemitz] [BEAM-13689] Output token elements when BQ batch writes complete.

[noreply] [BEAM-13699] Replace fnv with maphash. (#16573)

[noreply] [BEAM-13693] Bump

[noreply] [BEAM-10206] Remove Fatalf calls in non-test goroutines for

[noreply] [BEAM-13430] Re-add provided configuration (#16552)


------------------------------------------
[...truncated 415.91 KB...]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 21, 2022 1:01:41 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 21, 2022 1:01:42 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 21, 2022 1:01:43 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 361 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 21, 2022 1:01:46 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 21, 2022 1:01:46 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 21, 2022 1:01:48 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 21, 2022 1:01:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 21, 2022 1:01:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 21, 2022 1:01:48 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 21, 2022 1:01:49 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@457596841]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 21, 2022 1:01:50 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 21, 2022 1:01:50 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 21, 2022 1:01:50 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 21, 2022 1:01:50 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 21, 2022 1:01:50 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 21, 2022 1:01:50 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 21, 2022 1:01:50 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1460265227]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 21, 2022 1:01:50 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 21, 2022 1:01:50 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 21, 2022 1:01:50 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 21, 2022 1:01:50 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 21, 2022 1:01:50 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 21, 2022 1:01:50 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 21, 2022 1:01:51 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 21, 2022 1:01:51 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 21, 2022 1:01:51 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 21, 2022 1:01:56 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 362 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 21, 2022 1:01:56 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 21, 2022 1:01:57 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.37.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.37.0-SNAPSHOT-tests-FD0lEX1hz5x4oAa70CVi0hcX75UZWufQkv2AX2CkL4c.jar
    Jan 21, 2022 1:01:57 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8691348585700626823.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-1kvjP3xMmNFKL1DV8zAYnbhk0V-rpOayd2HwwjelpyI.jar
    Jan 21, 2022 1:01:57 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.dataformat/jackson-dataformat-xml/2.13.0/396634365e8439b27794fa94b571fdc03b4cf7be/jackson-dataformat-xml-2.13.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jackson-dataformat-xml-2.13.0-Yyu4WPqE_RYmgacP7BZkHcWrY2pzow2igQ0noIVmu7o.jar
    Jan 21, 2022 1:01:57 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.immutables/value/2.8.8/d99fa1e04af5a1fda42fa9412d68eb7fe17a1071/value-2.8.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/value-2.8.8-k_DQ4kEt4FHC2WzIqOzB87qR2WH_Mw81deYbQ6-miFA.jar
    Jan 21, 2022 1:01:57 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.fasterxml.woodstox/woodstox-core/6.2.6/d4a055af5dcadea8d1bded5b64bc7be5bb7fea95/woodstox-core-6.2.6.jar to gs://temp-storage-for-perf-tests/loadtests/staging/woodstox-core-6.2.6-7RMZid5VnxZ00HVSgs8J3XgTkSFBRgr4IvXvHJtaCuE.jar
    Jan 21, 2022 1:01:57 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mortbay.jetty/servlet-api/2.5-20081211/22bff70037e1e6fa7e6413149489552ee2064702/servlet-api-2.5-20081211.jar to gs://temp-storage-for-perf-tests/loadtests/staging/servlet-api-2.5-20081211-BodWCWmW_gD2BKw7ZnLW9mPcd36kqDBW4kDQRW535HI.jar
    Jan 21, 2022 1:01:58 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.inject.extensions/guice-servlet/3.0/610cde0e8da5a8b7d8efb8f0b8987466ffebaaf9/guice-servlet-3.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/guice-servlet-3.0-nnKkuFgoiNU8L0KX6TJ2o8FMgogBJEkPLaexap3xxhg.jar
    Jan 21, 2022 1:01:58 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 355 files cached, 7 files newly uploaded in 1 seconds
    Jan 21, 2022 1:01:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 21, 2022 1:01:58 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <142350 bytes, hash fa1ecb124af0c693fb52fffcf9256efb3e300b8fa20f9b45ed959dc659f4077c> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline--h7LEkrwxpP7Uv_8-SVu-z4wC4-iD5tF7ZWdxln0B3w.pb
    Jan 21, 2022 1:02:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 21, 2022 1:02:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 21, 2022 1:02:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 21, 2022 1:02:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 21, 2022 1:02:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-20_17_02_02-8095200770289689794?project=apache-beam-testing
    Jan 21, 2022 1:02:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-20_17_02_02-8095200770289689794
    Jan 21, 2022 1:02:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-20_17_02_02-8095200770289689794
    Jan 21, 2022 1:02:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-21T01:02:05.989Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 21, 2022 1:02:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T01:02:15.437Z: Worker configuration: e2-standard-2 in us-central1-f.
    Jan 21, 2022 1:02:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T01:02:16.415Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 21, 2022 1:02:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T01:02:16.456Z: Expanding GroupByKey operations into optimizable parts.
    Jan 21, 2022 1:02:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T01:02:16.488Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 21, 2022 1:02:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T01:02:16.562Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 21, 2022 1:02:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T01:02:16.604Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 21, 2022 1:02:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T01:02:16.637Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 21, 2022 1:02:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T01:02:17.045Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 21, 2022 1:02:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T01:02:17.121Z: Starting 5 workers in us-central1-f...
    Jan 21, 2022 1:02:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T01:02:24.057Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 21, 2022 1:03:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T01:03:07.085Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 21, 2022 1:03:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T01:03:32.765Z: Workers have started successfully.
    Jan 21, 2022 1:03:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T01:03:32.799Z: Workers have started successfully.
    Jan 21, 2022 1:04:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T01:04:11.670Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 21, 2022 1:04:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T01:04:11.898Z: Cleaning up.
    Jan 21, 2022 1:04:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T01:04:12.027Z: Stopping worker pool...
    Jan 21, 2022 1:06:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T01:06:41.101Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 21, 2022 1:06:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-21T01:06:41.171Z: Worker pool stopped.
    Jan 21, 2022 1:06:47 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-20_17_02_02-8095200770289689794 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 5d47add6-bd6a-4f55-b146-264ac28fccf4 and timestamp: 2022-01-21T01:06:47.843000000Z:
                     Metric:                    Value:
                   read_time                     9.549
                 fields_read                 4375276.0
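
(The query shape these runs execute can be reproduced locally against an in-memory table with SqlTransform, which makes the SQLPlan/BEAMPlan output above easy to inspect without BigQuery; no storage push-down applies in this form. A minimal sketch with made-up rows:)

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TupleTag;

    public class SqlQuerySketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Assumed four-column stand-in for the HACKER_NEWS table.
        Schema schema =
            Schema.builder()
                .addStringField("by")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();

        PCollection<Row> hackerNews =
            p.apply(
                Create.of(
                        Row.withSchema(schema).addValues("alice", "story", "t1", 3L).build(),
                        Row.withSchema(schema).addValues("bob", "comment", "t2", 9L).build())
                    .withRowSchema(schema));

        // Same query as in the logs above, against the in-memory table.
        PCollection<Row> result =
            PCollectionTuple.of(new TupleTag<>("HACKER_NEWS"), hackerNews)
                .apply(
                    SqlTransform.query(
                        "SELECT `by` AS author, `type`, `title`, `score` "
                            + "FROM HACKER_NEWS "
                            + "WHERE (`type` = 'story' OR `type` = 'job') AND `score` > 2"));

        p.run().waitUntilFinish();
      }
    }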

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 21, 2022 1:06:47 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 11 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 5 mins 13.557 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 22m 23s
165 actionable tasks: 142 executed, 21 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/ooq57537rku2u

Build cache (/home/jenkins/.gradle/caches/build-cache-1) removing files not accessed on or after Fri Jan 14 00:44:32 UTC 2022.
Invalidating in-memory cache of /home/jenkins/.gradle/caches/journal-1/file-access.bin
Build cache (/home/jenkins/.gradle/caches/build-cache-1) cleanup deleted 670 files/directories.
Build cache (/home/jenkins/.gradle/caches/build-cache-1) cleaned up in 0.532 secs.
Stopped 10 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2937

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2937/display/redirect?page=changes>

Changes:

[Pablo Estrada] Revert "Merge pull request #15863 from [BEAM-13184] Autosharding for

[Pablo Estrada] BEAM-13611 reactivating jdbcio xlang test

[noreply] Merge pull request #16371 from [BEAM-13518][Playground] Beam Playground

[noreply] Update Java FnAPI beam master (#16572)


------------------------------------------
[...truncated 345.74 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 4d1f470f40d9afe2c9ae3cd72a0aac8c
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 34'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 34'
Successfully started process 'Gradle Test Executor 34'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 20, 2022 6:44:42 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 20, 2022 6:44:43 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 20, 2022 6:44:44 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 20, 2022 6:44:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 20, 2022 6:44:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 20, 2022 6:44:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 20, 2022 6:44:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 20, 2022 6:44:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 20, 2022 6:44:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 20, 2022 6:44:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@472702055]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 20, 2022 6:44:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 20, 2022 6:44:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 20, 2022 6:44:49 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 20, 2022 6:44:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 20, 2022 6:44:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 20, 2022 6:44:49 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 20, 2022 6:44:49 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2111302742]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 20, 2022 6:44:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 20, 2022 6:44:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 20, 2022 6:44:49 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 20, 2022 6:44:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 20, 2022 6:44:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 20, 2022 6:44:49 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 20, 2022 6:44:50 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 20, 2022 6:44:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 20, 2022 6:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 20, 2022 6:44:54 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 20, 2022 6:44:54 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 20, 2022 6:44:54 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7776539329482530416.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-tMGpUVfqgSQZUkvHv20XZBz6ymPMlo5vaATkaw-eXHg.jar
    Jan 20, 2022 6:44:55 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 356 files cached, 1 files newly uploaded in 0 seconds
    Jan 20, 2022 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 20, 2022 6:44:55 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140697 bytes, hash f086ef2850ebe066b5c2d33cd4c09a05a1eab5a1e1249f6aeb0cafa2ba4087d9> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-8IbvKFDr4Ga1wtM81MCaBaHqtaHhJJ9q6wyvorpAh9k.pb
    Jan 20, 2022 6:44:58 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 20, 2022 6:44:58 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 20, 2022 6:44:58 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 20, 2022 6:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 20, 2022 6:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-20_10_44_58-17025760781492947971?project=apache-beam-testing
    Jan 20, 2022 6:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-20_10_44_58-17025760781492947971
    Jan 20, 2022 6:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-20_10_44_58-17025760781492947971
    Jan 20, 2022 6:45:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-20T18:45:02.002Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 20, 2022 6:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T18:45:13.406Z: Worker configuration: e2-standard-2 in us-central1-f.
    Jan 20, 2022 6:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T18:45:14.182Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 20, 2022 6:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T18:45:14.222Z: Expanding GroupByKey operations into optimizable parts.
    Jan 20, 2022 6:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T18:45:14.256Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 20, 2022 6:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T18:45:14.322Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 20, 2022 6:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T18:45:14.356Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 20, 2022 6:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T18:45:14.394Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 20, 2022 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T18:45:14.915Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 20, 2022 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T18:45:14.975Z: Starting 5 workers in us-central1-f...
    Jan 20, 2022 6:45:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T18:45:44.109Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 20, 2022 6:46:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T18:46:03.504Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 20, 2022 6:46:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T18:46:27.464Z: Workers have started successfully.
    Jan 20, 2022 6:46:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T18:46:27.501Z: Workers have started successfully.
    Jan 20, 2022 6:46:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T18:46:58.809Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 20, 2022 6:46:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T18:46:58.962Z: Cleaning up.
    Jan 20, 2022 6:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T18:46:59.044Z: Stopping worker pool...
    Jan 20, 2022 6:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T18:49:29.785Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 20, 2022 6:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T18:49:29.833Z: Worker pool stopped.
    Jan 20, 2022 6:49:47 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-20_10_44_58-17025760781492947971 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 842890e2-a242-4dcd-b1ea-17fb2a06a995 and timestamp: 2022-01-20T18:49:47.428000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                      7.06

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 20, 2022 6:49:47 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
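
The BEAMPlan and "Pushing down the following filter" lines above show the planner handing both the projection (by, type, title, score) and the WHERE clause to the BigQuery source. The same effect can be requested from BigQueryIO directly via the Storage Read API; a hedged sketch, with an illustrative public table standing in for the test's beam.HACKER_NEWS virtual table:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        p.apply(
            BigQueryIO.readTableRows()
                // Illustrative table reference, not taken from this job.
                .from("bigquery-public-data:hacker_news.full")
                .withMethod(TypedRead.Method.DIRECT_READ)
                // Column projection handed to the Storage Read API.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Row restriction equivalent to the pushed-down filter.
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }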

Gradle Test Executor 34 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.005 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.005 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 5 mins 8.757 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 26s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/eqjcdvf4ocw4y

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2936

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2936/display/redirect>

Changes:


------------------------------------------
[...truncated 352.76 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0fa20302f4217c9c4f6b40c5a3294dfc
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 20, 2022 12:45:25 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 20, 2022 12:45:26 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 20, 2022 12:45:27 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 20, 2022 12:45:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 20, 2022 12:45:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 20, 2022 12:45:31 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 20, 2022 12:45:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 20, 2022 12:45:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 20, 2022 12:45:31 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 20, 2022 12:45:32 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@472702055]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
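
This run hits the same coder failure through BeamSqlRelUtils.toPCollection, which the IT calls directly. For contrast, the usual user-facing path wraps the input in SqlTransform, which carries schemas through the relational plan so no manual coder setup is needed on its output; a minimal sketch with an illustrative schema and input:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TupleTag;

    public class SqlTransformSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        // Illustrative schema covering only the queried columns.
        Schema schema = Schema.builder()
            .addStringField("by")
            .addStringField("type")
            .addStringField("title")
            .addInt32Field("score")
            .build();
        PCollection<Row> input = p.apply(
            Create.of(Row.withSchema(schema)
                    .addValues("user", "story", "title", 3).build())
                .withRowSchema(schema));
        // The query mirrors the one logged above, against a local table.
        PCollection<Row> result =
            PCollectionTuple.of(new TupleTag<>("HACKER_NEWS"), input)
                .apply(SqlTransform.query(
                    "SELECT `by` AS author, type, title, score "
                        + "FROM HACKER_NEWS "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }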

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 20, 2022 12:45:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 20, 2022 12:45:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 20, 2022 12:45:32 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 20, 2022 12:45:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 20, 2022 12:45:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 20, 2022 12:45:32 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 20, 2022 12:45:32 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@411825368]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 20, 2022 12:45:33 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 20, 2022 12:45:33 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 20, 2022 12:45:33 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 20, 2022 12:45:33 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 20, 2022 12:45:33 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 20, 2022 12:45:33 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 20, 2022 12:45:33 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 20, 2022 12:45:33 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 20, 2022 12:45:33 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 20, 2022 12:45:38 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 20, 2022 12:45:38 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 20, 2022 12:45:38 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6496645314960679120.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-MzlZXsjUv7gvh9FIW2IVi-pgu-rJ9Y-lXv37CuhhXaM.jar
    Jan 20, 2022 12:45:38 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 356 files cached, 1 files newly uploaded in 0 seconds
    Jan 20, 2022 12:45:39 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 20, 2022 12:45:39 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140697 bytes, hash f125a602cd4f2b0c69b3997beab036c4e7c99f1b3cc064d95dbd15226142a9ec> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-8SWmAs1PKwxps5l76rA2xOfJnxs8wGTZXb0VImFCqew.pb
    Jan 20, 2022 12:45:42 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 20, 2022 12:45:42 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 20, 2022 12:45:42 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 20, 2022 12:45:42 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 20, 2022 12:45:43 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-20_04_45_42-13061785643846779467?project=apache-beam-testing
    Jan 20, 2022 12:45:43 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-20_04_45_42-13061785643846779467
    Jan 20, 2022 12:45:43 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-20_04_45_42-13061785643846779467
    Jan 20, 2022 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-20T12:45:46.385Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 20, 2022 12:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T12:45:55.142Z: Worker configuration: e2-standard-2 in us-central1-a.
    Jan 20, 2022 12:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T12:45:55.847Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 20, 2022 12:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T12:45:55.944Z: Expanding GroupByKey operations into optimizable parts.
    Jan 20, 2022 12:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T12:45:56.070Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 20, 2022 12:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T12:45:56.170Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 20, 2022 12:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T12:45:56.212Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 20, 2022 12:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T12:45:56.248Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 20, 2022 12:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T12:45:56.780Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 20, 2022 12:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T12:45:56.870Z: Starting 5 workers in us-central1-a...
    Jan 20, 2022 12:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T12:45:58.375Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 20, 2022 12:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T12:46:40.653Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 20, 2022 12:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T12:47:06.275Z: Workers have started successfully.
    Jan 20, 2022 12:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T12:47:06.385Z: Workers have started successfully.
    Jan 20, 2022 12:47:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T12:47:36.231Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 20, 2022 12:47:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T12:47:36.460Z: Cleaning up.
    Jan 20, 2022 12:47:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T12:47:36.567Z: Stopping worker pool...
    Jan 20, 2022 12:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T12:49:55.300Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 20, 2022 12:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T12:49:55.353Z: Worker pool stopped.
    Jan 20, 2022 12:50:02 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-20_04_45_42-13061785643846779467 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): da69e13c-05c8-4c43-aa45-abfec60e5b2c and timestamp: 2022-01-20T12:50:02.406000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.684

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 20, 2022 12:50:02 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 41.987 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 40s
165 actionable tasks: 106 executed, 57 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/dyga3s63f54sa

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2935

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2935/display/redirect>

Changes:


------------------------------------------
[...truncated 353.80 KB...]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 20, 2022 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 20, 2022 6:45:08 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 20, 2022 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 20, 2022 6:45:11 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 20, 2022 6:45:11 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 20, 2022 6:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 20, 2022 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 20, 2022 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 20, 2022 6:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 20, 2022 6:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@472702055]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
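
Same root cause once more. The other remedy the message lists, an explicit coder via setCoder, looks like this hedged fragment (schema and input are illustrative, as above):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class SetCoderSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        // Illustrative two-column schema.
        Schema schema = Schema.builder()
            .addStringField("type")
            .addInt32Field("score")
            .build();
        PCollection<Row> rows = p
            .apply(Create.of(Row.withSchema(schema)
                    .addValues("story", 3).build())
                .withRowSchema(schema))
            .apply(ParDo.of(new DoFn<Row, Row>() {
              @ProcessElement
              public void process(@Element Row r, OutputReceiver<Row> out) {
                out.output(r);
              }
            }))
            // Equivalent to setRowSchema for Row elements: an explicit
            // RowCoder satisfies coder inference on the ParDo output.
            .setCoder(RowCoder.of(schema));
        p.run().waitUntilFinish();
      }
    }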

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 20, 2022 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 20, 2022 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 20, 2022 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 20, 2022 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 20, 2022 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 20, 2022 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 20, 2022 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@411825368]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 20, 2022 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 20, 2022 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 20, 2022 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 20, 2022 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 20, 2022 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 20, 2022 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 20, 2022 6:45:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 20, 2022 6:45:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 20, 2022 6:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 20, 2022 6:45:20 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 20, 2022 6:45:20 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 20, 2022 6:45:20 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test9091902239154034285.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-2LY-xZCVwlvLfO_7Th4mfI-aHj8CmqWBPHbcK6dRP6k.jar
    Jan 20, 2022 6:45:20 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.37.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.37.0-SNAPSHOT-vPEGqxI9MKGLtuCuQz0HzI-33fdG7YkDTKUwhwE408c.jar
    Jan 20, 2022 6:45:20 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.37.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.37.0-SNAPSHOT-tests-PgTUab53yHZvt2XYz66dWYpw1xWQVI4VH3rxdzPsjK8.jar
    Jan 20, 2022 6:45:20 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/parquet/build/libs/beam-sdks-java-io-parquet-2.37.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-parquet-2.37.0-SNAPSHOT-Q2R_nNhUV0tvj1r9DvX7ZZSecSTKTDnwctGJ766dQYg.jar
    Jan 20, 2022 6:45:20 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/join-library/build/libs/beam-sdks-java-extensions-join-library-2.37.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-join-library-2.37.0-SNAPSHOT-xz-aSlhizJi8jigGo3amZ2g_MkglE3dBulicsi2UX5U.jar
    Jan 20, 2022 6:45:20 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/hadoop-common/build/libs/beam-sdks-java-io-hadoop-common-2.37.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-hadoop-common-2.37.0-SNAPSHOT-hW3lErracT-pnSMS13rV8EzLxOFgu9LbAXkGUl227k4.jar
    Jan 20, 2022 6:45:20 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/udf/build/libs/beam-sdks-java-extensions-sql-udf-2.37.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-udf-2.37.0-SNAPSHOT-MlQQAS1Je1cH4lXtIOKM3zL6o97S5F7sJQ3zUailw08.jar
    Jan 20, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 350 files cached, 7 files newly uploaded in 1 seconds
    Jan 20, 2022 6:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 20, 2022 6:45:21 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140697 bytes, hash 023ba13fa8ff294fee292dd890f107174b7276a643e4a32616b908f4ded05f04> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-AjuhP6j_KU_uKS3YkPEHF0tydqZD5KMmFrkI9N7QXwQ.pb
    Jan 20, 2022 6:45:24 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 20, 2022 6:45:24 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 20, 2022 6:45:24 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 20, 2022 6:45:24 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 20, 2022 6:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-19_22_45_25-17774367946453335695?project=apache-beam-testing
    Jan 20, 2022 6:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-19_22_45_25-17774367946453335695
    Jan 20, 2022 6:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-19_22_45_25-17774367946453335695
    Jan 20, 2022 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-20T06:45:28.208Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 20, 2022 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T06:45:40.536Z: Worker configuration: e2-standard-2 in us-central1-a.
    Jan 20, 2022 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T06:45:41.478Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 20, 2022 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T06:45:41.522Z: Expanding GroupByKey operations into optimizable parts.
    Jan 20, 2022 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T06:45:41.576Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 20, 2022 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T06:45:41.642Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 20, 2022 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T06:45:41.679Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 20, 2022 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T06:45:41.714Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 20, 2022 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T06:45:42.092Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 20, 2022 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T06:45:42.159Z: Starting 5 workers in us-central1-a...
    Jan 20, 2022 6:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T06:45:57.292Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 20, 2022 6:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T06:46:31.491Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 20, 2022 6:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T06:46:56.848Z: Workers have started successfully.
    Jan 20, 2022 6:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T06:46:56.951Z: Workers have started successfully.
    Jan 20, 2022 6:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T06:47:26.355Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 20, 2022 6:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T06:47:26.511Z: Cleaning up.
    Jan 20, 2022 6:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T06:47:26.616Z: Stopping worker pool...
    Jan 20, 2022 6:49:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T06:49:55.796Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 20, 2022 6:49:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T06:49:55.837Z: Worker pool stopped.
    Jan 20, 2022 6:50:01 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-19_22_45_25-17774367946453335695 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 0dd97fc3-8fba-43cd-b1f6-ee34713597cf and timestamp: 2022-01-20T06:50:01.603000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.979

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 20, 2022 6:50:01 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

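The InfluxDB warning above means this run's metrics were written to BigQuery only; the publisher was never given a target measurement or database. A hedged fix, assuming the --influxDatabase/--influxMeasurement test options used by other Beam performance jobs (the values here are illustrative), would be to extend the -DbeamTestPipelineOptions JSON shown earlier in the command line:

    "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch"
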
Gradle Test Executor 1031 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.007 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.005 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 58.363 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

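To identify which build script or plugin triggers the deprecations, the failing task can be re-run locally with the flags Gradle suggests above (standard Gradle flags; the task path is the one from this build):

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --warning-mode all --stacktrace
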
BUILD FAILED in 5m 38s
165 actionable tasks: 105 executed, 58 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/e5qnmmiehmuyi

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2934

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2934/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13683] Make cross-language SQL example pipeline (#16567)

[noreply] [BEAM-13688] fixed type in BPG 4.5.3 window section (#16560)

[noreply] Remove obsolete commands from Inventory job. (#16564)

[noreply] Disable logging for memoization test. (#16556)

[noreply] Merge pull request #16472: [BEAM-13697] Add SchemaFieldNumber annotation

[noreply] Merge pull request #16373 from [BEAM-13515] [Playground] Hiding lines in


------------------------------------------
[...truncated 361.79 KB...]
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0fa20302f4217c9c4f6b40c5a3294dfc
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 5'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 5'
Successfully started process 'Gradle Test Executor 5'

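The -DbeamTestPipelineOptions JSON in the command line above is how these integration tests receive their pipeline configuration. A minimal sketch of the consuming side, using only the core Beam testing API (the class name and printout are illustrative):

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    public class OptionsFromSystemProperty {
      public static void main(String[] args) {
        // Parses the JSON array from -DbeamTestPipelineOptions into PipelineOptions.
        PipelineOptions options = TestPipeline.testingPipelineOptions();
        // A test pipeline can then be built from those options.
        TestPipeline pipeline = TestPipeline.fromOptions(options);
        System.out.println("runner: " + options.getRunner().getSimpleName());
      }
    }
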
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 20, 2022 1:02:22 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 20, 2022 1:02:22 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 20, 2022 1:02:23 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 20, 2022 1:02:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 20, 2022 1:02:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 20, 2022 1:02:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 20, 2022 1:02:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 20, 2022 1:02:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 20, 2022 1:02:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 20, 2022 1:02:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@472702055]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

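This IllegalStateException is the root cause of both non-push-down failures in these runs: the Row output of ParDo(RowMonitor) has no schema attached, so no coder can be inferred. A minimal sketch of the fix the message itself suggests, with an illustrative schema for the four selected columns (field names and types are assumptions, not the IT's actual code):

    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFix {
      static PCollection<Row> withSchema(PCollection<Row> rows) {
        Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();
        // Either line resolves "Unable to return a default Coder":
        return rows.setRowSchema(schema);             // schema-aware (preferred)
        // return rows.setCoder(RowCoder.of(schema));
      }
    }
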
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 20, 2022 1:02:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 20, 2022 1:02:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 20, 2022 1:02:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 20, 2022 1:02:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 20, 2022 1:02:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 20, 2022 1:02:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 20, 2022 1:02:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@411825368]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 20, 2022 1:02:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 20, 2022 1:02:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 20, 2022 1:02:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 20, 2022 1:02:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 20, 2022 1:02:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 20, 2022 1:02:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 20, 2022 1:02:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 20, 2022 1:02:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
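
The push-down variant succeeds because the planner collapsed the Calc into the source: BeamPushDownIOSourceRel reads only usedFields and hands the supported filter (logged just above) to the BigQuery Storage API. A minimal sketch of issuing the same query through Beam SQL, assuming a table provider that registers HACKER_NEWS as a BigQuery table with method DIRECT_READ (registration omitted; class name illustrative):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushDownQuery {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        // With the table registered, the planner emits BeamPushDownIOSourceRel,
        // so projection and filtering happen inside the BigQuery read.
        PCollection<Row> result =
            p.apply(
                SqlTransform.query(
                    "SELECT `by` AS author, type, title, score "
                        + "FROM HACKER_NEWS "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }
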
    Jan 20, 2022 1:02:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 20, 2022 1:02:34 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 20, 2022 1:02:34 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 20, 2022 1:02:34 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.37.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.37.0-SNAPSHOT-N-FeHc6yBmJzpxSxi6WfWPRzY_tqOBs1O8gWPEZbMms.jar
    Jan 20, 2022 1:02:34 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3327765737025246116.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-34OBk9RO21c_xxZfKHse2CWU3Qcnc_NMKc_kFrqhZtg.jar
    Jan 20, 2022 1:02:35 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 355 files cached, 2 files newly uploaded in 0 seconds
    Jan 20, 2022 1:02:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 20, 2022 1:02:35 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140692 bytes, hash 968cca2a67b4a68fd9e9ac2bcf6bc2dd6a6f1b4c29dddd4ae3dc7979b57ed7a2> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-lozKKme0po_Z6awrz2vC3WpvG0wp3d1K49x5ebV-16I.pb
    Jan 20, 2022 1:02:38 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 20, 2022 1:02:38 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 20, 2022 1:02:38 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 20, 2022 1:02:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 20, 2022 1:02:39 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-19_17_02_38-11050914784659122500?project=apache-beam-testing
    Jan 20, 2022 1:02:39 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-19_17_02_38-11050914784659122500
    Jan 20, 2022 1:02:39 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-19_17_02_38-11050914784659122500
    Jan 20, 2022 1:02:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-20T01:02:42.393Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 20, 2022 1:02:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T01:02:55.836Z: Worker configuration: e2-standard-2 in us-central1-a.
    Jan 20, 2022 1:02:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T01:02:56.405Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 20, 2022 1:02:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T01:02:56.443Z: Expanding GroupByKey operations into optimizable parts.
    Jan 20, 2022 1:02:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T01:02:56.470Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 20, 2022 1:02:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T01:02:56.550Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 20, 2022 1:02:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T01:02:56.583Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 20, 2022 1:02:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T01:02:56.614Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 20, 2022 1:02:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T01:02:56.989Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 20, 2022 1:02:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T01:02:57.064Z: Starting 5 workers in us-central1-a...
    Jan 20, 2022 1:03:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T01:03:23.029Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 20, 2022 1:03:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T01:03:40.986Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 20, 2022 1:04:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T01:04:06.314Z: Workers have started successfully.
    Jan 20, 2022 1:04:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T01:04:06.341Z: Workers have started successfully.
    Jan 20, 2022 1:04:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T01:04:36.753Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 20, 2022 1:04:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T01:04:37.278Z: Cleaning up.
    Jan 20, 2022 1:04:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T01:04:37.366Z: Stopping worker pool...
    Jan 20, 2022 1:07:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T01:07:05.005Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 20, 2022 1:07:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-20T01:07:05.076Z: Worker pool stopped.
    Jan 20, 2022 1:07:11 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-19_17_02_38-11050914784659122500 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): cff8c0ad-ff2d-4657-8ca8-01441a40ef6b and timestamp: 2022-01-20T01:07:11.127000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.267

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 20, 2022 1:07:11 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 5 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 53.255 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 23s
165 actionable tasks: 110 executed, 53 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/tyh6ophqqlfso

Stopped 4 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2933

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2933/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13611] Skip test_xlang_jdbc_write (#16554)

[noreply] Merge pull request #16370 from [BEAM-13556] playground - color and

[noreply] Merge pull request #16531 from [BEAM-13567] [playground] Handle run code

[noreply] Merge pull request #16533 from [BEAM-13548] [Playground] Add example

[noreply] Merge pull request #16519 from [BEAM-13639] [Playground] Add

[noreply] Merge pull request #16518 from [BEAM-13619] [Playground] Add loading

[noreply] Merge pull request #16243 from


------------------------------------------
[...truncated 351.10 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is fcda02b54b07d34023c958ea5cfe4684
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 19, 2022 6:46:17 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 19, 2022 6:46:18 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 19, 2022 6:46:18 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 19, 2022 6:46:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 19, 2022 6:46:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 19, 2022 6:46:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 19, 2022 6:46:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 19, 2022 6:46:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 19, 2022 6:46:23 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 19, 2022 6:46:23 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@472702055]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 19, 2022 6:46:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 19, 2022 6:46:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 19, 2022 6:46:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 19, 2022 6:46:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 19, 2022 6:46:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 19, 2022 6:46:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 19, 2022 6:46:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2111302742]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 19, 2022 6:46:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 19, 2022 6:46:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 19, 2022 6:46:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 19, 2022 6:46:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 19, 2022 6:46:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 19, 2022 6:46:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 19, 2022 6:46:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 19, 2022 6:46:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 19, 2022 6:46:25 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 19, 2022 6:46:29 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 19, 2022 6:46:30 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 19, 2022 6:46:30 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7260655837097203435.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-oQfZGh59wLtGZsvwHx4HRbWlMQfWPMJL3iKN8_WDdXE.jar
    Jan 19, 2022 6:46:30 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 356 files cached, 1 files newly uploaded in 0 seconds
    Jan 19, 2022 6:46:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 19, 2022 6:46:30 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140697 bytes, hash 11d617add2927bdaabfe9b7bb836cab44727690f97faaa1a7439d9eab0879a3b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-EdYXrdKSe9qr_pt7uDbKtEcnaQ-X-qoadDnZ6rCHmjs.pb
    Jan 19, 2022 6:46:33 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 19, 2022 6:46:34 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 19, 2022 6:46:34 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 19, 2022 6:46:34 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 19, 2022 6:46:35 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-19_10_46_34-2674845897714856579?project=apache-beam-testing
    Jan 19, 2022 6:46:35 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-19_10_46_34-2674845897714856579
    Jan 19, 2022 6:46:35 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-19_10_46_34-2674845897714856579
    Jan 19, 2022 6:46:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-19T18:46:37.750Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 19, 2022 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T18:46:50.513Z: Worker configuration: e2-standard-2 in us-central1-a.
    Jan 19, 2022 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T18:46:51.329Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 19, 2022 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T18:46:51.359Z: Expanding GroupByKey operations into optimizable parts.
    Jan 19, 2022 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T18:46:51.397Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 19, 2022 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T18:46:51.519Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 19, 2022 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T18:46:51.545Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 19, 2022 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T18:46:51.588Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 19, 2022 6:46:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T18:46:52.000Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 19, 2022 6:46:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T18:46:52.096Z: Starting 5 workers in us-central1-a...
    Jan 19, 2022 6:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T18:47:03.564Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 19, 2022 6:47:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T18:47:36.994Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 19, 2022 6:48:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T18:48:02.680Z: Workers have started successfully.
    Jan 19, 2022 6:48:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T18:48:02.699Z: Workers have started successfully.
    Jan 19, 2022 6:48:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T18:48:29.784Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 19, 2022 6:48:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T18:48:29.944Z: Cleaning up.
    Jan 19, 2022 6:48:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T18:48:30.075Z: Stopping worker pool...
    Jan 19, 2022 6:50:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T18:50:55.039Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 19, 2022 6:50:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T18:50:55.107Z: Worker pool stopped.
    Jan 19, 2022 6:51:00 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-19_10_46_34-2674845897714856579 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 541b02e0-34ac-4e86-be18-f47650ebe361 and timestamp: 2022-01-19T18:51:00.642000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     5.416

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 19, 2022 6:51:00 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 47.571 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 36s
165 actionable tasks: 104 executed, 59 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/kv6jbovg2zske

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2932

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2932/display/redirect?page=changes>

Changes:

[noreply] Merge pull request #16309: [BEAM-13503] Set a default value to

[noreply] [BEAM-13015] Provide caching statistics in the status client. (#16495)


------------------------------------------
[...truncated 395.35 KB...]
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 1ca130995c1256b02ec12b53f5beca63
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 12'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 12'
Successfully started process 'Gradle Test Executor 12'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 19, 2022 12:58:08 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 19, 2022 12:58:09 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 19, 2022 12:58:10 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 19, 2022 12:58:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 19, 2022 12:58:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 19, 2022 12:58:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 19, 2022 12:58:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 19, 2022 12:58:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 19, 2022 12:58:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 19, 2022 12:58:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@472702055]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
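
The exception spells out the remedy: a PCollection of Beam Rows cannot get a coder by inference alone, so a schema has to be attached with PCollection.setRowSchema before the pipeline is finalized. A minimal sketch under stated assumptions (RowMonitorFn is a hypothetical DoFn<Row, Row> standing in for the RowMonitor above, rows is an assumed upstream PCollection<Row>, and the field types are guessed from the Hacker News columns; this is not the IT's actual code):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Schema for the four projected columns in the query logged above.
    Schema schema = Schema.builder()
        .addNullableField("author", Schema.FieldType.STRING)
        .addNullableField("type", Schema.FieldType.STRING)
        .addNullableField("title", Schema.FieldType.STRING)
        .addNullableField("score", Schema.FieldType.INT64)
        .build();

    // setRowSchema attaches the schema, which lets Beam derive a RowCoder:
    // exactly what the "Unable to return a default Coder" check asks for.
    PCollection<Row> monitored =
        rows.apply(ParDo.of(new RowMonitorFn()))
            .setRowSchema(schema);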

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 19, 2022 12:58:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 19, 2022 12:58:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 19, 2022 12:58:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 19, 2022 12:58:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 19, 2022 12:58:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 19, 2022 12:58:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 19, 2022 12:58:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@411825368]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 19, 2022 12:58:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 19, 2022 12:58:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 19, 2022 12:58:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 19, 2022 12:58:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 19, 2022 12:58:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 19, 2022 12:58:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 19, 2022 12:58:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 19, 2022 12:58:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
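
For context on what this push-down means at the IO layer: with DIRECT_READ (the BigQuery Storage Read API), both the column projection (usedFields above) and the filter can be applied server-side, so only matching rows leave BigQuery. A hedged sketch using BigQueryIO directly (the table name and the pipeline variable p are assumptions; the selected fields and row restriction mirror the plan logged above):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.values.PCollection;

    // Column projection and filter are handed to the Storage Read API,
    // mirroring usedFields and BigQueryFilter from the BEAMPlan above.
    PCollection<TableRow> filtered = p.apply(
        BigQueryIO.readTableRows()
            .from("bigquery-public-data:hacker_news.full")  // assumed table
            .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction(
                "(type = 'story' OR type = 'job') AND score > 2"));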
    Jan 19, 2022 12:58:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 19, 2022 12:58:22 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 19, 2022 12:58:22 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 19, 2022 12:58:22 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1123612835532199803.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-GBH8tWJk5SWUGn2bQBsPvWCUdxz83DhCeLyrwSwVcTA.jar
    Jan 19, 2022 12:58:22 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/core/build/libs/beam-sdks-java-core-2.37.0-SNAPSHOT-unshaded.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-core-2.37.0-SNAPSHOT-unshaded-oDdj5UW6TnmEyx2XfYQkmdVAnJrHSmM57fBxhZHA9n8.jar
    Jan 19, 2022 12:58:23 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 355 files cached, 2 files newly uploaded in 0 seconds
    Jan 19, 2022 12:58:23 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 19, 2022 12:58:23 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140697 bytes, hash f22151c2a31ddeadcc11605f2d834210f153b7e2d46d812848b58474fff04780> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-8iFRwqMd3q3MEWBfLYNCEPFTt-LUbYEoSLWEdP_wR4A.pb
    Jan 19, 2022 12:58:26 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 19, 2022 12:58:26 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 19, 2022 12:58:26 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 19, 2022 12:58:26 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 19, 2022 12:58:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-19_04_58_26-9764910361905419812?project=apache-beam-testing
    Jan 19, 2022 12:58:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-19_04_58_26-9764910361905419812
    Jan 19, 2022 12:58:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-19_04_58_26-9764910361905419812
    Jan 19, 2022 12:58:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-19T12:58:30.031Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 19, 2022 12:58:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T12:58:36.935Z: Worker configuration: e2-standard-2 in us-central1-f.
    Jan 19, 2022 12:58:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T12:58:37.580Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 19, 2022 12:58:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T12:58:37.623Z: Expanding GroupByKey operations into optimizable parts.
    Jan 19, 2022 12:58:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T12:58:37.643Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 19, 2022 12:58:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T12:58:37.706Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 19, 2022 12:58:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T12:58:37.733Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 19, 2022 12:58:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T12:58:37.769Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 19, 2022 12:58:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T12:58:38.114Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 19, 2022 12:58:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T12:58:38.193Z: Starting 5 workers in us-central1-f...
    Jan 19, 2022 12:59:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T12:59:01.238Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 19, 2022 12:59:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T12:59:23.763Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 19, 2022 12:59:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T12:59:51.082Z: Workers have started successfully.
    Jan 19, 2022 12:59:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T12:59:51.120Z: Workers have started successfully.
    Jan 19, 2022 1:00:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T13:00:22.733Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 19, 2022 1:00:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T13:00:22.889Z: Cleaning up.
    Jan 19, 2022 1:00:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T13:00:22.987Z: Stopping worker pool...
    Jan 19, 2022 1:02:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T13:02:53.236Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 19, 2022 1:02:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T13:02:53.294Z: Worker pool stopped.
    Jan 19, 2022 1:02:59 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-19_04_58_26-9764910361905419812 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 42c744ad-6838-44a5-b7af-f86a2adc2650 and timestamp: 2022-01-19T13:02:59.261000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.566

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 19, 2022 1:02:59 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
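
The publisher fails soft here: with no database/measurement configured it simply skips the write, so this warning is unrelated to the test failures above. If publishing were desired, these properties are normally supplied through the same -DbeamTestPipelineOptions list shown at the top of the task, along the lines of the fragment below (option names assumed from Beam's InfluxDB test utilities; verify against InfluxDBSettings before relying on them):

    "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch","--influxHost=http://localhost:8086"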

Gradle Test Executor 12 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 55.56 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 18m 36s
165 actionable tasks: 142 executed, 21 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/226345rzbkd3e

Stopped 11 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2931

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2931/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13015] Add state caching capability to be used as hint for runners


------------------------------------------
[...truncated 361.77 KB...]
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 5'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 5'
Successfully started process 'Gradle Test Executor 5'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 19, 2022 6:46:35 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 19, 2022 6:46:36 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 19, 2022 6:46:37 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 19, 2022 6:46:40 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 19, 2022 6:46:40 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 19, 2022 6:46:41 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 19, 2022 6:46:41 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 19, 2022 6:46:41 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 19, 2022 6:46:42 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 19, 2022 6:46:42 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@472702055]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 19, 2022 6:46:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 19, 2022 6:46:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 19, 2022 6:46:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 19, 2022 6:46:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 19, 2022 6:46:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 19, 2022 6:46:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 19, 2022 6:46:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@411825368]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 19, 2022 6:46:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 19, 2022 6:46:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 19, 2022 6:46:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 19, 2022 6:46:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 19, 2022 6:46:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 19, 2022 6:46:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 19, 2022 6:46:44 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 19, 2022 6:46:44 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 19, 2022 6:46:44 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 19, 2022 6:46:49 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 19, 2022 6:46:49 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-k66m13ArRI0R1KftwLYwzuhK4SF6vnDCoulhkXfjtVI.jar
    Jan 19, 2022 6:46:49 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.37.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.37.0-SNAPSHOT-tests-WW_qh_qZ0OtxNK1HIZTckB7QnP0NV0saj8TCBpbuXbM.jar
    Jan 19, 2022 6:46:49 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.37.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.37.0-SNAPSHOT-tests-9waRkOC4eM0X7ABYrPq1tRErTw5OgxB_aYDH-VkPcDE.jar
    Jan 19, 2022 6:46:49 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2378743224616230700.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-jTBLuCUXQueWeV1o2CSKaCVfw_4palOrHKS2fU9g_yw.jar
    Jan 19, 2022 6:46:50 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 354 files cached, 3 files newly uploaded in 1 seconds
    Jan 19, 2022 6:46:50 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 19, 2022 6:46:50 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140697 bytes, hash eaebb29d40d51dff6454fc1508974e12c36e88b40cbf659450b0fef62e93725c> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-6uuynUDVHf9kVPwVCJdOEsNuiLQMv2WUULD-9i6Tclw.pb
    Jan 19, 2022 6:46:53 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 19, 2022 6:46:53 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 19, 2022 6:46:53 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 19, 2022 6:46:53 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 19, 2022 6:46:54 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-18_22_46_54-11229792352325862362?project=apache-beam-testing
    Jan 19, 2022 6:46:54 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-18_22_46_54-11229792352325862362
    Jan 19, 2022 6:46:54 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-18_22_46_54-11229792352325862362
    Jan 19, 2022 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-19T06:46:57.338Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 19, 2022 6:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T06:47:06.536Z: Worker configuration: e2-standard-2 in us-central1-a.
    Jan 19, 2022 6:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T06:47:07.188Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 19, 2022 6:47:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T06:47:07.225Z: Expanding GroupByKey operations into optimizable parts.
    Jan 19, 2022 6:47:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T06:47:07.261Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 19, 2022 6:47:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T06:47:07.366Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 19, 2022 6:47:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T06:47:07.396Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 19, 2022 6:47:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T06:47:07.421Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 19, 2022 6:47:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T06:47:07.766Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 19, 2022 6:47:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T06:47:07.844Z: Starting 5 workers in us-central1-a...
    Jan 19, 2022 6:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T06:47:26.867Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 19, 2022 6:47:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T06:47:52.307Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 19, 2022 6:48:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T06:48:18.336Z: Workers have started successfully.
    Jan 19, 2022 6:48:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T06:48:18.366Z: Workers have started successfully.
    Jan 19, 2022 6:48:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T06:48:43.923Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 19, 2022 6:48:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T06:48:44.100Z: Cleaning up.
    Jan 19, 2022 6:48:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T06:48:44.243Z: Stopping worker pool...
    Jan 19, 2022 6:51:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T06:51:03.332Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 19, 2022 6:51:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T06:51:03.405Z: Worker pool stopped.
    Jan 19, 2022 6:51:10 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-18_22_46_54-11229792352325862362 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 5d01161f-3a24-44d2-8808-0c7f7694a0da and timestamp: 2022-01-19T06:51:10.659000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.878

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 19, 2022 6:51:10 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 5 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 38.886 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 48s
165 actionable tasks: 110 executed, 53 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/stebhmsvjl3jm

Stopped 4 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2930

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2930/display/redirect?page=changes>

Changes:

[alexander.zhuravlev] [BEAM-13680] Fixed code_repository (added pipelineUuid to RunCodeResult

[noreply] [BEAM-13616][BEAM-13646] Upgrade vendored calcite to 1.28.0:0.2 (#16544)

[noreply] Merge pull request #16486 from [BEAM-13544][Playground] Add logs to

[noreply] [BEAM-13683] Correct SQL transform schema, fix expansion address

[noreply] Update walkthrough.md (#16512)

[noreply] [BEAM-11808][BEAM-9879] Support aggregate functions with two arguments

[noreply] Merge pull request #16506 from [BEAM-13652][Playground] Send examples'

[noreply] Merge pull request #16322 from [BEAM-13407] [Playground] Preload fonts

[noreply] [BEAM-13665] Make SpannerIO projectID optional again (#16547)


------------------------------------------
[...truncated 355.27 KB...]
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 19, 2022 12:57:27 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 19, 2022 12:57:28 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 19, 2022 12:57:29 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 19, 2022 12:57:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 19, 2022 12:57:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 19, 2022 12:57:33 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 19, 2022 12:57:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 19, 2022 12:57:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 19, 2022 12:57:33 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 19, 2022 12:57:34 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@472702055]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 19, 2022 12:57:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 19, 2022 12:57:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 19, 2022 12:57:34 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 19, 2022 12:57:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 19, 2022 12:57:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 19, 2022 12:57:34 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 19, 2022 12:57:34 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2111302742]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 19, 2022 12:57:35 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 19, 2022 12:57:35 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 19, 2022 12:57:35 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 19, 2022 12:57:35 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 19, 2022 12:57:35 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 19, 2022 12:57:35 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 19, 2022 12:57:35 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 19, 2022 12:57:35 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
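
    Note the contrast with the failing plans above: there the planner produced a BeamCalcRel over a plain BeamIOSourceRel, while here DIRECT_READ routes the read through the BigQuery Storage Read API, which accepts the column projection (usedFields) and the filter (BigQueryFilter) directly. A sketch of the same push-down expressed against BigQueryIO itself — a comparison under assumptions, with the table spec illustrative (the public Hacker News dataset), not the test's actual table wiring:

    import java.util.Arrays;

    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // With DIRECT_READ, the projection and row restriction are evaluated
        // inside BigQuery rather than in the pipeline -- the same effect as
        // usedFields + BigQueryFilter in the BEAMPlan above.
        PCollection<TableRow> rows =
            p.apply(
                BigQueryIO.readTableRows()
                    .from("bigquery-public-data:hacker_news.full") // illustrative table spec
                    .withMethod(TypedRead.Method.DIRECT_READ)
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    .withRowRestriction(
                        "(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }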
    Jan 19, 2022 12:57:36 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 19, 2022 12:57:40 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 19, 2022 12:57:40 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-dvUz6Lw1ruWywmb16T26nihRmNSCkzJeB0QPX7BBslo.jar
    Jan 19, 2022 12:57:40 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.37.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.37.0-SNAPSHOT-HQENi2du-l0lArfj2kwBNPXpIEcARbs4fAVboofr6uU.jar
    Jan 19, 2022 12:57:40 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3558681368941080233.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-UvS6FdmpXRHLb0_10VwOfjNAHQivFgKWgH0UTnBiE3E.jar
    Jan 19, 2022 12:57:40 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.beam/beam-vendor-calcite-1_28_0/0.2/b8ec320b972b575ab37767bf8d4cfadff1fe304a/beam-vendor-calcite-1_28_0-0.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/beam-vendor-calcite-1_28_0-0.2-pvjNvR5NntriHz0ja9OKytRXl6tOlifYWKUI0wMGtVo.jar
    Jan 19, 2022 12:57:42 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 354 files cached, 3 files newly uploaded in 1 seconds
    Jan 19, 2022 12:57:42 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 19, 2022 12:57:42 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140665 bytes, hash 3074beb4bc840a8637a3e9f88590f92b9b9d642dabf8e4b5185849419ab746b8> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-MHS-tLyECoY3o-n4hZD5K5udZC2r-OS1GFhJQZq3Rrg.pb
    Jan 19, 2022 12:57:45 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 19, 2022 12:57:45 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 19, 2022 12:57:45 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 19, 2022 12:57:45 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 19, 2022 12:57:46 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-18_16_57_45-3429559469246529717?project=apache-beam-testing
    Jan 19, 2022 12:57:46 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-18_16_57_45-3429559469246529717
    Jan 19, 2022 12:57:46 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-18_16_57_45-3429559469246529717
    Jan 19, 2022 12:57:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-19T00:57:49.324Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 19, 2022 12:57:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T00:57:58.958Z: Worker configuration: e2-standard-2 in us-central1-b.
    Jan 19, 2022 12:58:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T00:57:59.527Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 19, 2022 12:58:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T00:57:59.571Z: Expanding GroupByKey operations into optimizable parts.
    Jan 19, 2022 12:58:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T00:57:59.599Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 19, 2022 12:58:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T00:57:59.672Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 19, 2022 12:58:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T00:57:59.699Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 19, 2022 12:58:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T00:57:59.744Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 19, 2022 12:58:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T00:58:00.086Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 19, 2022 12:58:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T00:58:00.190Z: Starting 5 workers in us-central1-b...
    Jan 19, 2022 12:58:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T00:58:11.075Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 19, 2022 12:58:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T00:58:51.604Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 19, 2022 12:59:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T00:59:18.859Z: Workers have started successfully.
    Jan 19, 2022 12:59:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T00:59:18.878Z: Workers have started successfully.
    Jan 19, 2022 12:59:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T00:59:50.133Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 19, 2022 12:59:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T00:59:50.290Z: Cleaning up.
    Jan 19, 2022 12:59:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T00:59:50.426Z: Stopping worker pool...
    Jan 19, 2022 1:02:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T01:02:07.618Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 19, 2022 1:02:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-19T01:02:07.708Z: Worker pool stopped.
    Jan 19, 2022 1:02:14 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-18_16_57_45-3429559469246529717 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): a218e63e-449c-462a-a213-4e0635e98f2c and timestamp: 2022-01-19T01:02:14.698000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.617

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 19, 2022 1:02:14 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
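
    This warning means the job itself passed but the metrics table above stayed local: the InfluxDB publisher was never given a database or measurement. A sketch of supplying them, assuming the InfluxDBSettings builder in Beam's testutils around this release (withHost / withDatabase / withMeasurement / get are assumptions to verify against the SDK in use; the host is a placeholder, and the database/measurement names mirror the --metricsBigQueryDataset / --metricsBigQueryTable options this job already passes):

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSettingsSketch {
      public static void main(String[] args) {
        // Placeholder values: in the perf-test harness these come from pipeline
        // options, and the resulting settings are handed to the publisher.
        InfluxDBSettings settings =
            InfluxDBSettings.builder()
                .withHost("http://localhost:8086") // assumed placeholder host
                .withDatabase("beam_performance")
                .withMeasurement("sql_bqio_read_java_batch")
                .get();
      }
    }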

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 51.473 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 6s
165 actionable tasks: 104 executed, 59 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/edbgnepsx7zhc

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2929

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2929/display/redirect?page=changes>

Changes:

[Robert Bradshaw] Bump beam container version.

[Robert Bradshaw] Also bump FnAPI container.

[noreply] [BEAM-13616][BEAM-13645] Switch to vendored grpc 1.43.2 (#16543)


------------------------------------------
[...truncated 349.68 KB...]
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 18, 2022 6:53:25 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 18, 2022 6:53:26 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 18, 2022 6:53:27 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 18, 2022 6:53:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 18, 2022 6:53:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 18, 2022 6:53:30 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 18, 2022 6:53:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 18, 2022 6:53:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 18, 2022 6:53:31 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 18, 2022 6:53:31 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@429393578]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 18, 2022 6:53:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 18, 2022 6:53:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 18, 2022 6:53:32 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 18, 2022 6:53:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 18, 2022 6:53:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 18, 2022 6:53:32 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 18, 2022 6:53:32 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1264231563]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 18, 2022 6:53:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 18, 2022 6:53:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 18, 2022 6:53:32 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 18, 2022 6:53:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 18, 2022 6:53:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 18, 2022 6:53:32 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 18, 2022 6:53:33 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 18, 2022 6:53:33 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 18, 2022 6:53:33 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 18, 2022 6:53:38 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 18, 2022 6:53:38 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-dvUz6Lw1ruWywmb16T26nihRmNSCkzJeB0QPX7BBslo.jar
    Jan 18, 2022 6:53:38 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3318468761243685688.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-oDLaRyPqdhfqyfqUT5tNDOuV4-7xYmqRHwGplgDyjm4.jar
    Jan 18, 2022 6:53:38 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.thrift/libthrift/0.14.1/85348a0c44c298bbec5ae747e67ae12e60b3aef6/libthrift-0.14.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/libthrift-0.14.1-WzUQ_nLm8HJeKc7269seq6zMxp15_E7VC2gWAKh2Z-w.jar
    Jan 18, 2022 6:53:38 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/8.5.46/5d686394334d143f48251827435ab086a161e75e/tomcat-embed-core-8.5.46.jar to gs://temp-storage-for-perf-tests/loadtests/staging/tomcat-embed-core-8.5.46-vl-FREjS7l1uADb-srT3ExYweaG2uaepdQjlWRetNcI.jar
    Jan 18, 2022 6:53:38 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.tomcat/tomcat-annotations-api/8.5.46/56c67699de192c603afd6f029e80e5ff8d98e7e9/tomcat-annotations-api-8.5.46.jar to gs://temp-storage-for-perf-tests/loadtests/staging/tomcat-annotations-api-8.5.46-amtG0OaVhkRRTAyjZYs7B-YSOmgqIO4203lSQnNfq8M.jar
    Jan 18, 2022 6:53:39 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 353 files cached, 4 files newly uploaded in 0 seconds
    Jan 18, 2022 6:53:39 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 18, 2022 6:53:39 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140665 bytes, hash f9e2867cee0ca15b5a08bf13217a97c2834dde427aa2bb49f701a3cf122fc355> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline--eKGfO4MoVtaCL8TIXqXwoNN3kJ6ortJ9wGjzxIvw1U.pb
    Jan 18, 2022 6:53:42 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 18, 2022 6:53:42 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 18, 2022 6:53:42 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 18, 2022 6:53:42 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 18, 2022 6:53:43 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-18_10_53_42-4319598140390684500?project=apache-beam-testing
    Jan 18, 2022 6:53:43 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-18_10_53_42-4319598140390684500
    Jan 18, 2022 6:53:43 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-18_10_53_42-4319598140390684500
    Jan 18, 2022 6:53:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-18T18:53:45.971Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 18, 2022 6:53:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T18:53:53.216Z: Worker configuration: e2-standard-2 in us-central1-a.
    Jan 18, 2022 6:53:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T18:53:53.950Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 18, 2022 6:53:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T18:53:53.996Z: Expanding GroupByKey operations into optimizable parts.
    Jan 18, 2022 6:53:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T18:53:54.016Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 18, 2022 6:53:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T18:53:54.083Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 18, 2022 6:53:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T18:53:54.134Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 18, 2022 6:53:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T18:53:54.181Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 18, 2022 6:53:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T18:53:54.682Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 18, 2022 6:53:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T18:53:54.751Z: Starting 5 workers in us-central1-a...
    Jan 18, 2022 6:54:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T18:54:07.250Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 18, 2022 6:54:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T18:54:46.000Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 18, 2022 6:55:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T18:55:11.798Z: Workers have started successfully.
    Jan 18, 2022 6:55:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T18:55:11.835Z: Workers have started successfully.
    Jan 18, 2022 6:55:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T18:55:42.072Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 18, 2022 6:55:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T18:55:42.220Z: Cleaning up.
    Jan 18, 2022 6:55:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T18:55:42.287Z: Stopping worker pool...
    Jan 18, 2022 6:58:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T18:58:13.615Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 18, 2022 6:58:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T18:58:13.657Z: Worker pool stopped.
    Jan 18, 2022 6:58:20 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-18_10_53_42-4319598140390684500 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 92f07e90-14ed-480d-b1c0-8ff68070d41f and timestamp: 2022-01-18T18:58:20.081000000Z:
                     Metric:                    Value:
                   read_time                     7.249
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 18, 2022 6:58:20 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 58.353 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 44s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/ikom52aeycdzo

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2928

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2928/display/redirect?page=changes>

Changes:

[mmack] [BEAM-8806] Integration test for SqsIO

[mmack] [BEAM-13631] Add deterministic SQS message coder to fix reading from SQS


------------------------------------------
[...truncated 358.05 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is d2824585f14717d3ba98d12608e8b5f3
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 18, 2022 12:50:40 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 18, 2022 12:50:41 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 18, 2022 12:50:42 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 18, 2022 12:50:44 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 18, 2022 12:50:44 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 18, 2022 12:50:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 18, 2022 12:50:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 18, 2022 12:50:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 18, 2022 12:50:46 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 18, 2022 12:50:46 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1823577333]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 18, 2022 12:50:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 18, 2022 12:50:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 18, 2022 12:50:47 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 18, 2022 12:50:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 18, 2022 12:50:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 18, 2022 12:50:47 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 18, 2022 12:50:47 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1543664352]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 18, 2022 12:50:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 18, 2022 12:50:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 18, 2022 12:50:47 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 18, 2022 12:50:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 18, 2022 12:50:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 18, 2022 12:50:47 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 18, 2022 12:50:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 18, 2022 12:50:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 18, 2022 12:50:48 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 18, 2022 12:50:52 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 18, 2022 12:50:52 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-HoKmZanH_5O4w8vR-XjUi706ly4Ab8VQuPSZeLzefik.jar
    Jan 18, 2022 12:50:52 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4661967899884024404.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-864wPL_eCVtLHG5rF1vWUUh9JXitoYbUKqHLxJoN0a8.jar
    Jan 18, 2022 12:50:53 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 356 files cached, 1 files newly uploaded in 0 seconds
    Jan 18, 2022 12:50:53 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 18, 2022 12:50:53 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140669 bytes, hash d390cf4bb40e414cf78d06612794260af1131912a8dd30e52978220ef89223c6> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-05DPS7QOQUz3jQZhJ5QmCvETGRKo3TDlKXgiDviSI8Y.pb
    Jan 18, 2022 12:50:56 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 18, 2022 12:50:56 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 18, 2022 12:50:56 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 18, 2022 12:50:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 18, 2022 12:50:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-18_04_50_57-5674712791122215964?project=apache-beam-testing
    Jan 18, 2022 12:50:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-18_04_50_57-5674712791122215964
    Jan 18, 2022 12:50:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-18_04_50_57-5674712791122215964
    Jan 18, 2022 12:51:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-18T12:51:00.302Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 18, 2022 12:51:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T12:51:07.650Z: Worker configuration: e2-standard-2 in us-central1-b.
    Jan 18, 2022 12:51:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T12:51:08.394Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 18, 2022 12:51:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T12:51:08.440Z: Expanding GroupByKey operations into optimizable parts.
    Jan 18, 2022 12:51:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T12:51:08.483Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 18, 2022 12:51:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T12:51:08.547Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 18, 2022 12:51:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T12:51:08.575Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 18, 2022 12:51:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T12:51:08.600Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 18, 2022 12:51:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T12:51:08.910Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 18, 2022 12:51:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T12:51:08.976Z: Starting 5 workers in us-central1-b...
    Jan 18, 2022 12:51:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T12:51:28.085Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 18, 2022 12:52:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T12:52:01.641Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 18, 2022 12:52:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T12:52:27.147Z: Workers have started successfully.
    Jan 18, 2022 12:52:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T12:52:27.173Z: Workers have started successfully.
    Jan 18, 2022 12:52:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T12:52:56.898Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 18, 2022 12:52:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T12:52:57.018Z: Cleaning up.
    Jan 18, 2022 12:52:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T12:52:57.079Z: Stopping worker pool...
    Jan 18, 2022 12:55:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T12:55:31.408Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 18, 2022 12:55:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T12:55:31.448Z: Worker pool stopped.
    Jan 18, 2022 12:55:36 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-18_04_50_57-5674712791122215964 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 5afe18d2-eb11-416c-a023-00b70ce2c5ba and timestamp: 2022-01-18T12:55:37.047000000Z:
                     Metric:                    Value:
                   read_time                     7.533
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 18, 2022 12:55:37 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
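
This warning means the InfluxDB publisher was left unconfigured, so the read_time/fields_read metrics above are not exported anywhere. A sketch of the extra entries that would supply the missing properties in the -DbeamTestPipelineOptions list shown earlier, assuming the Beam test-utils option names --influxDatabase, --influxMeasurement and --influxHost (the values below are illustrative placeholders, not this job's real configuration):

    "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch","--influxHost=http://localhost:8086"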

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.048 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 5 mins 0.456 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 56s
165 actionable tasks: 107 executed, 56 from cache, 2 up-to-date

Publishing build scan...
https://scans.gradle.com/s/clco7g2eai57y

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2927

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2927/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13616] Update com.google.cloud:libraries-bom to 24.2.0 (#16509)


------------------------------------------
[...truncated 366.37 KB...]
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 18, 2022 6:50:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 18, 2022 6:50:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 18, 2022 6:50:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-HoKmZanH_5O4w8vR-XjUi706ly4Ab8VQuPSZeLzefik.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7049229210061119140.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-77Th93iYys2pdRlA-OEbf7GLOn-D7uhLKnMKYGUtGKs.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/google-cloud-spanner/6.17.4/9a7337eb485c2dd787c7b9c1aa674f5a9488e3f6/google-cloud-spanner-6.17.4.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-cloud-spanner-6.17.4-NvG_91cO2RfpEp9bgh1dl-N3SCcaKXMAoFmtVb0nBzY.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/google-cloud-firestore/3.0.10/e136459d4f4a4665fe24635a29a94e5fe6146f13/google-cloud-firestore-3.0.10.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-cloud-firestore-3.0.10-qSEb8ZNEhQ3-CDYLufbjjSy4FfbW1akGiX24ahm6GYo.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-xds/1.43.2/f5e1e4e2b112a442e71d8b41870d701b94d6e6b3/grpc-xds-1.43.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-xds-1.43.2-kKn43HRk1UugsWx9rlkl1HNk2-FI4jNqj4Buj7Ew1O0.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/google-cloud-core-grpc/2.3.5/cd6952f6932149e20ba76705e83ad31a62a543c0/google-cloud-core-grpc-2.3.5.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-cloud-core-grpc-2.3.5-B96u4rGD4LAKp5JAJpgj85yRferYF5dBcL-Mw6Usai4.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/google-cloud-core-http/2.3.5/f4e707942bbf52b340ffcb031c474e80be8aebcb/google-cloud-core-http-2.3.5.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-cloud-core-http-2.3.5-Y45l7ITliE_A1fdWRahYmo28d_TzhdCIMmZnyMZGl5Q.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/google-cloud-bigtable/2.5.1/65a1d2dbb40cdbf123f0a821f0e4b7baf5934c20/google-cloud-bigtable-2.5.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-cloud-bigtable-2.5.1-Kzho6kSL6dq8D488o-XeAT9oR5Q4Rqd25NU-BloA_xY.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/google-cloud-pubsublite/1.4.8/29ac80c96750e528a04540f26b2cdd7ff86f9c39/google-cloud-pubsublite-1.4.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-cloud-pubsublite-1.4.8--F3qoUNuO0WWj7a5lSIBgyNbHrTraP_RHjxyL_qA97k.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api/gax-httpjson/0.93.1/a9e83876b48a4b2e3d0b6b98757ea82bfb425992/gax-httpjson-0.93.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gax-httpjson-0.93.1-3mLEOpLK4mWYf0pPkjWVDpyOP79XBj5eeTAsdJ1gU7g.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/google-cloud-pubsub/1.115.1/9e52842d6a8afd586b7fbb6a234bfe29ef84581f/google-cloud-pubsub-1.115.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-cloud-pubsub-1.115.1-132-_l_R3uufJ-0-LL4KKrIxPND0PJST8kFL96R_9L0.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/google-cloud-bigquery/2.6.1/57c082185405d2516b1156b5e4bd3870abac94e/google-cloud-bigquery-2.6.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-cloud-bigquery-2.6.1-fOiuGw_77016lF9ncjVbT4fpmDsMzUAwOhoWitw12W8.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.protobuf/protobuf-java-util/3.19.2/fd73ae9d9f8121eca362d54279f73bb56219a594/protobuf-java-util-3.19.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/protobuf-java-util-3.19.2-lYjWfbORsZz_tOxljZNlrFNisgiWYcFX_SfyE4DN0eg.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api/gax-grpc/2.8.1/a05a5132300512c128cecf57ac5b784153131c61/gax-grpc-2.8.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gax-grpc-2.8.1-Oaqma5KQUeKekzhWvuQ9BCsF3Uf1NF_DCF0Um3fxw3s.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-alts/1.43.2/d455472c0e92b618be145f9fe96847b7b49ad733/grpc-alts-1.43.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-alts-1.43.2-AQTUHJA58-b5EWFsOF-bWHqlwMg1n-WRWQVGoPBU6j8.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/google-cloud-core/2.3.5/a3dfdc439c0660c42dad7355edc6fc0f2d2eca77/google-cloud-core-2.3.5.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-cloud-core-2.3.5-C35jAguM6WqLA8gwfDr535FlX0WLgOzR6mahoKXWWLA.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-services/1.43.2/640afe074fbd01203b9805528d43d7005d8b7f84/grpc-services-1.43.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-services-1.43.2-J1rDyujv-30H9tVmfZ-2RATDInUnY06po-EeL_uQ7JI.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-grpclb/1.43.2/640d55674e8393a95c5b7e872369f27eb4cb0f52/grpc-grpclb-1.43.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-grpclb-1.43.2-Gbzwbzna-Rp0yvVQGsGt6OaDykwvnSJY7KCoz22lVp4.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/grpc-google-cloud-bigtable-admin-v2/2.5.1/55f41343aac0544bae194f1c5556c21e4405a4a9/grpc-google-cloud-bigtable-admin-v2-2.5.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-google-cloud-bigtable-admin-v2-2.5.1-ob0IcfQ8PBD6xAXtwjOQfKjtH9hNSB-WE_aIXPoucyc.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/google-cloud-bigquerystorage/2.8.1/898b2a7e74d8426a818db7fe60f38add6e4138f1/google-cloud-bigquerystorage-2.8.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-cloud-bigquerystorage-2.8.1-k2Xhnag6oOqx7ZdRP-MyG5GPog-y-A_UWbgWIWPeXuk.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-cloud-bigquerystorage-v1/2.8.1/363411af50c6ef14bcff071d63171825e8ebe1b8/proto-google-cloud-bigquerystorage-v1-2.8.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-bigquerystorage-v1-2.8.1-jTgogla2D6ibCK3nRPJKzSca8LWozLH4zbdDpmpar8E.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/grpc-google-cloud-pubsublite-v1/1.4.8/be7bb301b8c88aabed8b69664f9932dcbacc886f/grpc-google-cloud-pubsublite-v1-1.4.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-google-cloud-pubsublite-v1-1.4.8-6u9Zka-0VlWWQ8j_SiOMFT5CF_zhg8qgua4IA1mC8Jk.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-cloud-bigtable-v2/2.5.1/d52b57f264bf14a6e56ecae1e69a53db4ebff814/proto-google-cloud-bigtable-v2-2.5.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-bigtable-v2-2.5.1-x9CPNrUzBwlAdnewS0kU1fEE9XV_YPHtwpi00MwzHaI.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-cloud-pubsub-v1/1.97.1/42001cb9dc3afe0720a3963a17e9fef3baa498b0/proto-google-cloud-pubsub-v1-1.97.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-pubsub-v1-1.97.1-ApJ8HbEmd_jT8_yVYthoJGnUGYsWyvSJHd5WNGR8YEY.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/grpc-google-cloud-pubsub-v1/1.97.1/dca75235263df5590c78a0ef6b3acc2d24fcd8f7/grpc-google-cloud-pubsub-v1-1.97.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-google-cloud-pubsub-v1-1.97.1-H957ISBI6FO7A_4RkmLEVjdK98uI49E5DeQxQbugGcw.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-cloud-spanner-v1/6.17.4/89cb35be4e34d78516b60ad1c3d28c9431dc7714/proto-google-cloud-spanner-v1-6.17.4.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-spanner-v1-6.17.4-ovE6oaSfezt09Fl4N6n-ODKswROe8mmcPiOoiIj_gAE.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-cloud-bigtable-admin-v2/2.5.1/ede4cc6479e9814e02c8c54779fe6ce291df37fd/proto-google-cloud-bigtable-admin-v2-2.5.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-bigtable-admin-v2-2.5.1-26OpbPNOGR6-7mwVeAm_4jfeHrS7pBG7Jn3v7ouuC9A.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-cloud-pubsublite-v1/1.4.8/a63b477aeb1f0c65c53d43eb6a302f1dab63657f/proto-google-cloud-pubsublite-v1-1.4.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-pubsublite-v1-1.4.8-YW67pFEZvqX-1FcBIBVkwx7kGCBHeYcU2jhald79BWU.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/grpc-google-cloud-bigtable-v2/2.5.1/b9fb1aa41ae3d058234d61f814138121b42f64cd/grpc-google-cloud-bigtable-v2-2.5.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-google-cloud-bigtable-v2-2.5.1-Barc4ghZtpoVZrmqklg5V7kITHoh7b5bLCDv4vzRCgE.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api/gax/2.8.1/c99dcb923675ae3e2fe8eb66f736e80f89c66231/gax-2.8.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gax-2.8.1-gdVSB49hbgrxl95vGYBsuziCNSVzG7BDBEwIzMoWuyY.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-cloud-firestore-v1/3.0.10/8cd44e1c2844e73cc000b18f1e7739059da8ee79/proto-google-cloud-firestore-v1-3.0.10.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-firestore-v1-3.0.10-N1HdROIC0ZXbQNmY_WczeM8uoF9-m__BXZv7HwCGjP0.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-cloud-spanner-admin-database-v1/6.17.4/28ddfe517bb5286782c13210496841d5788d64e4/proto-google-cloud-spanner-admin-database-v1-6.17.4.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-spanner-admin-database-v1-6.17.4-rQXf44U6uxl9Yw6FkDCwXKe0ucBiXx54ycQclHnptCU.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.http-client/google-http-client-jackson2/1.41.0/768627d6e8e55b6ec499163b5e283e26d1b049b5/google-http-client-jackson2-1.41.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-http-client-jackson2-1.41.0-V9mrXIJqyvqAf3_4hOHTrQw1KiiowE1w4Udb1ePsAv0.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.http-client/google-http-client-apache-v2/1.41.0/d3edd9b035d8407a359c3395a5a7f1b3d1e534e/google-http-client-apache-v2-1.41.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-http-client-apache-v2-1.41.0-tHwDhZ4_mUq9p7xSUbjRYGoNYO9Qdht9OkjgafPbo7Q.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.http-client/google-http-client-gson/1.41.0/1a754a5dd672218a2ac667d7ff2b28df7a5a240e/google-http-client-gson-1.41.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-http-client-gson-1.41.0-GHZtG76202N2PvclvILtPIxF-I6tvxSZZeboyNm3DZ8.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.http-client/google-http-client-protobuf/1.41.0/f2bebf991ab80173cafca6594e4cb0f405f25096/google-http-client-protobuf-1.41.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-http-client-protobuf-1.41.0-jXTVhevPhDPzVEQgm7J_-ckW4zoCStSitc2hNf_4Mhc.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.http-client/google-http-client-appengine/1.41.0/9944400b0dfb804cb883bf8d989e1fb0232e9052/google-http-client-appengine-1.41.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-http-client-appengine-1.41.0-mRx3u0M79PmtRa72vXLx2RL75tBr-SOqGily-EazHgY.jar
    Jan 18, 2022 6:50:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.http-client/google-http-client/1.41.0/6334215e876a7bda17072910e16b150868a7018b/google-http-client-1.41.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-http-client-1.41.0-SkLSw4M3O_YceY8YgnRZibmDa7AyqKQBmJDsts2QPvM.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-auth/1.43.2/eb09273098bfd761f6daafa50be5b85b16c73cae/grpc-auth-1.43.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-auth-1.43.2-HmWTaJw6KoM2dwi0NyqtFEPmdkFNPLR32NwBKgijgY4.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-netty-shaded/1.43.2/e4399db174148350bcd0462aba267b4cdef63f03/grpc-netty-shaded-1.43.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-netty-shaded-1.43.2-NdilC-pb9dyW7JKG6O1M8Cw6tNlOKevoMBe0r2oGXjQ.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-core/1.43.2/c65caaa4e04c7058b87fa85b7d740a26db1f174e/grpc-core-1.43.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-core-1.43.2-ZHitP_wfLGOnYkqs-Csnb088TXNdjIWhGoUUUifY-_U.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-netty/1.43.2/f07d3c6feb52ad61e31fb26d6a7d05d3f3a4fb2c/grpc-netty-1.43.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-netty-1.43.2-fpRYxxrxoCQjKZgLF-lpy74DtxdwsaIBXM_AqqCZdZk.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-stub/1.43.2/a1d1db8180c7e3579e5401881184cf335bd7bd07/grpc-stub-1.43.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-stub-1.43.2-FTl8LJT4zbZM6mJXt_8n_t683O6m_AQtuo029Vqyne4.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/grpc-google-cloud-bigquerystorage-v1beta1/0.132.1/9319aa86a8e9bb9e6d1e4d49642bb0ac78965cde/grpc-google-cloud-bigquerystorage-v1beta1-0.132.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-google-cloud-bigquerystorage-v1beta1-0.132.1-evuDFcY3MMwGvo4i7WfgAfdnH15LRGSQS4u6t70_rVc.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-protobuf/1.43.2/3137da7dfb3c9f676375b8c5216f33221cdc100a/grpc-protobuf-1.43.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-protobuf-1.43.2-wD9Cdaeuaq0v25TY1WugRLqzg2Zu6r9Rc2IdgsB6SmI.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/grpc-google-cloud-bigquerystorage-v1beta2/0.132.1/39afe71d9f36bf38a5c5492240f2120d43b3bf23/grpc-google-cloud-bigquerystorage-v1beta2-0.132.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-google-cloud-bigquerystorage-v1beta2-0.132.1-nY6YD1b6ubs_aBHMQUv8ZyfZB2dpR5_FBfa4RMve_-U.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/grpc-google-cloud-bigquerystorage-v1/2.8.1/58c5557dbc2ade886629e65a5da265c0da634e44/grpc-google-cloud-bigquerystorage-v1-2.8.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-google-cloud-bigquerystorage-v1-2.8.1-wv5JwOd_pSwIqHrauHI6s4o6K_qW6MnRVlXyevVdzSY.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/grpc-google-cloud-spanner-v1/6.17.4/62782cd78be3016ed9ace86f39bcf76c7d60ce1/grpc-google-cloud-spanner-v1-6.17.4.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-google-cloud-spanner-v1-6.17.4-HktnHei9X_qDhFj7cFzZT3Bg4l_0FKqJBBnClM53XvY.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/grpc-google-cloud-spanner-admin-instance-v1/6.17.4/147be724a518589d9448b2a5f7dedadd4871b94/grpc-google-cloud-spanner-admin-instance-v1-6.17.4.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-google-cloud-spanner-admin-instance-v1-6.17.4-umiFixWfWG4I2IBD9ygM9PLZDriUTWdF8cLMXSLzWKI.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/grpc-google-cloud-spanner-admin-database-v1/6.17.4/99543c6918d601df0ea0b149551c672c7c05e19b/grpc-google-cloud-spanner-admin-database-v1-6.17.4.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-google-cloud-spanner-admin-database-v1-6.17.4-EHaP9atUnbXYrI13AVGWEJ_SnS4KcQsT4XKaeylPJEs.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-api/1.43.2/27037d34c59e0514f9c6c5a90c2e1c903c9f67e9/grpc-api-1.43.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-api-1.43.2-l_XPIFcg7lDBiUHz7URhXw6-SIOfAYOAWpeAdbdsxIc.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-protobuf-lite/1.43.2/63b433515810299ed3fb8fe375b10eb1ac71dd17/grpc-protobuf-lite-1.43.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-protobuf-lite-1.43.2-qNzODgy3dm5NEm0rQyiAWKRyIy8zuTtwgW5XhPS-nrQ.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api/api-common/2.1.2/eff20f72b42843d48d01c19099d4a22e8da773fa/api-common-2.1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/api-common-2.1.2-pCljgjnlQ9B9z9RD1QlIkMbDo-olku-O45McIX1969k.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/proto-google-cloud-firestore-bundle-v1/3.0.10/6077db1bc30e8b1d30007c22675b5ae4f6530e51/proto-google-cloud-firestore-bundle-v1-3.0.10.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-firestore-bundle-v1-3.0.10-xZM3LjVy3uPCNoswpMbWc-DFavmKuTQZiQlZzWv0oJs.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-cloud-bigquerystorage-v1beta1/0.132.1/b8dc53a717f223bd30baa167d3d2fd22ce1f9482/proto-google-cloud-bigquerystorage-v1beta1-0.132.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-bigquerystorage-v1beta1-0.132.1-_ADrk7EMdsRx6X-UK8KthRJQozmjXUqJCbYVLoynDQU.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-cloud-spanner-admin-instance-v1/6.17.4/741cdbb20e8a349191f2bc890792ae73d46ee958/proto-google-cloud-spanner-admin-instance-v1-6.17.4.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-spanner-admin-instance-v1-6.17.4-BVeImmd9A_Gq3x6wWWYiCCOuzjWqX4jrb1JZWNi12UA.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-context/1.43.2/cc6fd89fb5adbae97e7724d4f4f7fbed71018163/grpc-context-1.43.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-context-1.43.2-LVkiu6JXYHgSiOztJXtCZy2q7VGh0hVLrAt4qN7TSd8.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-cloud-bigquerystorage-v1beta2/0.132.1/a2f8a3f54a3657f941a8fa270002cf0ec73f0c82/proto-google-cloud-bigquerystorage-v1beta2-0.132.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-bigquerystorage-v1beta2-0.132.1-0JmvdyPBNApI1OW-G0MsAA46N384NzKdwaRWagUJ08w.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-cloud-datastore-v1/0.93.2/f0e0c7898334274fa7e794364c433ec82285857a/proto-google-cloud-datastore-v1-0.93.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-cloud-datastore-v1-0.93.2-pEF0MS2d2kq_sajAAk-9bw4PRV_GsMhnE3e-kYVr9vM.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-iam-v1/1.2.0/4ac276c6fc59c48ea916514bfcfaec66b8d607b4/proto-google-iam-v1-1.2.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-iam-v1-1.2.0-HuLkiySJhZblfnnzSMUHVQBpw-Gmeq2gTa_D-yc3aeQ.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/proto-google-common-protos/2.7.1/f0f101045d738992ec8a3956f0cce9ce49a76fe1/proto-google-common-protos-2.7.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/proto-google-common-protos-2.7.1-2uzBRrVaCYyRmf5VWxhHYyWs96tkD6m1nBzdY0RGno8.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.protobuf/protobuf-java/3.19.2/e958ce38f96b612d3819ff1c753d4d70609aea74/protobuf-java-3.19.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/protobuf-java-3.19.2-NEbL-hN5Uii8ZUm5GkCfJ832kT0cjwPg-ZVymIYjoEs.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.api.grpc/grpc-google-common-protos/2.7.1/f807d1d3498ae17f6e5756735c4b63e17c75b61d/grpc-google-common-protos-2.7.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/grpc-google-common-protos-2.7.1-FmSjhrkerwanLjUt1fuXtZV9otp1Wwz5ftdijYe8O4c.jar
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 295 files cached, 62 files newly uploaded in 1 seconds
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 18, 2022 6:50:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140667 bytes, hash eae1f3771d30b41f3094fa439d659902948f28036e4bc50546c792e2e78a83d9> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-6uHzdx0wtB8wlPpDnWWZApSPKANuS8UFRseS4ueKg9k.pb
    Jan 18, 2022 6:50:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 18, 2022 6:50:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 18, 2022 6:50:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 18, 2022 6:50:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 18, 2022 6:50:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-17_22_50_13-6326213112225646794?project=apache-beam-testing
    Jan 18, 2022 6:50:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-17_22_50_13-6326213112225646794
    Jan 18, 2022 6:50:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-17_22_50_13-6326213112225646794
    Jan 18, 2022 6:50:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-18T06:50:16.443Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 18, 2022 6:50:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T06:50:23.585Z: Worker configuration: e2-standard-2 in us-central1-f.
    Jan 18, 2022 6:50:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T06:50:24.184Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 18, 2022 6:50:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T06:50:24.223Z: Expanding GroupByKey operations into optimizable parts.
    Jan 18, 2022 6:50:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T06:50:24.244Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 18, 2022 6:50:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T06:50:24.313Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 18, 2022 6:50:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T06:50:24.340Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 18, 2022 6:50:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T06:50:24.398Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 18, 2022 6:50:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T06:50:24.720Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 18, 2022 6:50:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T06:50:24.797Z: Starting 5 workers in us-central1-f...
    Jan 18, 2022 6:50:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T06:50:35.511Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 18, 2022 6:51:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T06:51:10.426Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 18, 2022 6:51:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T06:51:36.045Z: Workers have started successfully.
    Jan 18, 2022 6:51:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T06:51:36.077Z: Workers have started successfully.
    Jan 18, 2022 6:52:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T06:52:10.244Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 18, 2022 6:52:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T06:52:10.385Z: Cleaning up.
    Jan 18, 2022 6:52:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T06:52:10.543Z: Stopping worker pool...
    Jan 18, 2022 6:54:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T06:54:38.702Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 18, 2022 6:54:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T06:54:38.745Z: Worker pool stopped.
    Jan 18, 2022 6:54:45 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-17_22_50_13-6326213112225646794 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4c30e9cd-b385-4ec5-b997-c0b25225131c and timestamp: 2022-01-18T06:54:45.530000000Z:
                     Metric:                    Value:
                   read_time                    10.607
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 18, 2022 6:54:45 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 40 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.009 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.005 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 55.307 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 40s
165 actionable tasks: 107 executed, 56 from cache, 2 up-to-date

Publishing build scan...
https://scans.gradle.com/s/5bp3qfgeemqas

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2926

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2926/display/redirect?page=changes>

Changes:

[aydar.zaynutdinov] [BEAM-13641][Playground]

[noreply] [BEAM-13430] Remove jcenter which will no longer contain any updates.


------------------------------------------
[...truncated 367.71 KB...]
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 6'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 6'
Successfully started process 'Gradle Test Executor 6'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 18, 2022 12:46:57 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 18, 2022 12:46:58 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 18, 2022 12:46:59 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 18, 2022 12:47:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 18, 2022 12:47:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 18, 2022 12:47:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 18, 2022 12:47:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 18, 2022 12:47:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 18, 2022 12:47:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 18, 2022 12:47:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@167018904]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
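
The failure above names its own remedies. A minimal sketch of the second one, PCollection.setRowSchema, assuming a PCollection<Row> produced by BeamSqlRelUtils.toPCollection and a schema matching the query's SELECT list (the class name and field types here are illustrative, not taken from the test source):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFix {
      // Schema mirroring the SELECTed fields: author, type, title, score.
      static final Schema OUTPUT_SCHEMA =
          Schema.builder()
              .addNullableField("author", Schema.FieldType.STRING)
              .addNullableField("type", Schema.FieldType.STRING)
              .addNullableField("title", Schema.FieldType.STRING)
              .addNullableField("score", Schema.FieldType.INT64)
              .build();

      // Attaching the schema installs a schema-aware coder on the collection,
      // which satisfies the coder inference that failed above.
      static PCollection<Row> withRowSchema(PCollection<Row> rows) {
        return rows.setRowSchema(OUTPUT_SCHEMA);
      }
    }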

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 18, 2022 12:47:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 18, 2022 12:47:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 18, 2022 12:47:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 18, 2022 12:47:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 18, 2022 12:47:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 18, 2022 12:47:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 18, 2022 12:47:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2102730438]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)
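
The same coder error recurs with the DEFAULT read method. The first remedy the message names, an explicit .setCoder() call, is sketched below; RowCoder.of wraps the same schema-based encoding of Rows, so either fix resolves the inference failure (the helper name is illustrative):

    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowCoderFix {
      // Setting a RowCoder explicitly is the first remedy suggested by the
      // error text; RowCoder.of(schema) encodes Rows using the given schema.
      static PCollection<Row> withRowCoder(PCollection<Row> rows, Schema schema) {
        return rows.setCoder(RowCoder.of(schema));
      }
    }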

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 18, 2022 12:47:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 18, 2022 12:47:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 18, 2022 12:47:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 18, 2022 12:47:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 18, 2022 12:47:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 18, 2022 12:47:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 18, 2022 12:47:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 18, 2022 12:47:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
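
For context on the push-down above: projection and filter push-down apply when the BigQuery table is declared with the DIRECT_READ method, as the "BigQuery method is set to: DIRECT_READ" lines indicate. A sketch of Beam SQL DDL selecting that method, with a placeholder location and an abbreviated column list rather than the test's actual table definition:

    CREATE EXTERNAL TABLE HACKER_NEWS (
      title VARCHAR,
      `by` VARCHAR,
      score BIGINT,
      type VARCHAR
    )
    TYPE bigquery
    LOCATION 'my-project:my_dataset.hacker_news'
    TBLPROPERTIES '{"method": "DIRECT_READ"}'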
    Jan 18, 2022 12:47:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 18, 2022 12:47:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 18, 2022 12:47:09 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-HoKmZanH_5O4w8vR-XjUi706ly4Ab8VQuPSZeLzefik.jar
    Jan 18, 2022 12:47:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2930655324667291472.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-IdS73ItgInPmQjbuCR1noEiZqNwdSrQTFDf6txROGI4.jar
    Jan 18, 2022 12:47:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 356 files cached, 1 files newly uploaded in 0 seconds
    Jan 18, 2022 12:47:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 18, 2022 12:47:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140660 bytes, hash e1dd738e869e0b6775f3d9eefc55a3584acedc6cc1d6a83e561f848f4f13cb9d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-4d1zjoaeC2d189nu_FWjWErO3GzB1qg-Vh-Ej08Ty50.pb
    Jan 18, 2022 12:47:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 18, 2022 12:47:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 18, 2022 12:47:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 18, 2022 12:47:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 18, 2022 12:47:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-17_16_47_14-862523260568416772?project=apache-beam-testing
    Jan 18, 2022 12:47:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-17_16_47_14-862523260568416772
    Jan 18, 2022 12:47:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-17_16_47_14-862523260568416772
    Jan 18, 2022 12:47:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-18T00:47:18.995Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 18, 2022 12:47:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T00:47:26.646Z: Worker configuration: e2-standard-2 in us-central1-a.
    Jan 18, 2022 12:47:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T00:47:27.356Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 18, 2022 12:47:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T00:47:27.396Z: Expanding GroupByKey operations into optimizable parts.
    Jan 18, 2022 12:47:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T00:47:27.426Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 18, 2022 12:47:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T00:47:27.489Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 18, 2022 12:47:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T00:47:27.518Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 18, 2022 12:47:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T00:47:27.600Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 18, 2022 12:47:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T00:47:27.996Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 18, 2022 12:47:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T00:47:28.081Z: Starting 5 workers in us-central1-a...
    Jan 18, 2022 12:47:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T00:47:31.231Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 18, 2022 12:48:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T00:48:13.883Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 18, 2022 12:48:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T00:48:37.729Z: Workers have started successfully.
    Jan 18, 2022 12:48:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T00:48:37.752Z: Workers have started successfully.
    Jan 18, 2022 12:49:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T00:49:11.051Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 18, 2022 12:49:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T00:49:11.187Z: Cleaning up.
    Jan 18, 2022 12:49:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T00:49:11.265Z: Stopping worker pool...
    Jan 18, 2022 12:51:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T00:51:34.386Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 18, 2022 12:51:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-18T00:51:34.436Z: Worker pool stopped.
    Jan 18, 2022 12:51:41 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-17_16_47_14-862523260568416772 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 96b43d51-9ef0-4ff5-953f-01b65769d1fa and timestamp: 2022-01-18T00:51:41.331000000Z:
                     Metric:                    Value:
                   read_time                    12.015
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 18, 2022 12:51:41 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
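
Note: this WARNING is benign for the run above: the metrics still reach the BigQuery destination given by --metricsBigQueryDataset/--metricsBigQueryTable on the command line, and only the InfluxDB publication is skipped because no measurement/database was configured. As a hedged sketch of how the publisher is normally configured (builder names as in org.apache.beam.sdk.testutils.publishing.InfluxDBSettings; the host, database, and measurement values below are placeholders, not this job's configuration):

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSettingsSketch {
      public static void main(String[] args) {
        // Illustrative only: all three properties must be present, otherwise
        // publishing is skipped exactly as the WARNING above reports.
        InfluxDBSettings settings = InfluxDBSettings.builder()
            .withHost("http://localhost:8086")            // placeholder
            .withDatabase("beam_performance")             // placeholder
            .withMeasurement("sql_bqio_read_java_batch")  // placeholder
            .get();
        System.out.println(settings);
      }
    }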

Gradle Test Executor 6 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 47.648 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 22s
165 actionable tasks: 112 executed, 51 from cache, 2 up-to-date

Publishing build scan...
https://scans.gradle.com/s/gzrlvlef6uiow

Build cache (/home/jenkins/.gradle/caches/build-cache-1) removing files not accessed on or after Tue Jan 11 00:44:23 UTC 2022.
Invalidating in-memory cache of /home/jenkins/.gradle/caches/journal-1/file-access.bin
Build cache (/home/jenkins/.gradle/caches/build-cache-1) cleanup deleted 1848 files/directories.
Build cache (/home/jenkins/.gradle/caches/build-cache-1) cleaned up in 0.739 secs.
Stopped 5 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2925

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2925/display/redirect?page=changes>

Changes:

[noreply] Remove jcenter repositories from gradle configuration. (#16532)


------------------------------------------
[...truncated 354.32 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 2c70d06ab0301621b777fa90c79d804f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 17, 2022 6:45:20 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 17, 2022 6:45:21 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 17, 2022 6:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 17, 2022 6:45:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 17, 2022 6:45:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 17, 2022 6:45:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 17, 2022 6:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 17, 2022 6:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 17, 2022 6:45:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 17, 2022 6:45:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@18392277]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 17, 2022 6:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 17, 2022 6:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 17, 2022 6:45:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 17, 2022 6:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 17, 2022 6:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 17, 2022 6:45:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 17, 2022 6:45:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1046044067]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)
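
Note: both test failures above have the same root cause: the PCollection<Row> produced by ParDo(RowMonitor) carries no schema, so no Row coder can be inferred. The remedy named in the error message itself is PCollection.setRowSchema. A minimal, self-contained sketch of that fix (the schema and the parsing DoFn are illustrative assumptions, not the integration test's actual code):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFixSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        // Hypothetical schema matching the queried columns.
        final Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

        PCollection<Row> rows =
            p.apply(Create.of("alice,story,hello,3"))
                .apply("ToRow", ParDo.of(new DoFn<String, Row>() {
                  @ProcessElement
                  public void process(@Element String line, OutputReceiver<Row> out) {
                    String[] f = line.split(",");
                    out.output(Row.withSchema(schema)
                        .addValues(f[0], f[1], f[2], Long.parseLong(f[3])).build());
                  }
                }))
                // Without this call, coder inference fails with the same
                // IllegalStateException seen in the stack traces above.
                .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }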

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 17, 2022 6:45:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 17, 2022 6:45:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 17, 2022 6:45:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 17, 2022 6:45:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 17, 2022 6:45:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 17, 2022 6:45:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 17, 2022 6:45:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 17, 2022 6:45:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
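
Note: the BEAMPlan above shows the entire LogicalFilter and the column projection absorbed into a single BeamPushDownIOSourceRel (contrast the non-push-down runs, where a BeamCalcRel sits on top of the BeamIOSourceRel). With DIRECT_READ this roughly corresponds to handing the Storage Read API a selected-field list and a row restriction. A minimal sketch of the equivalent hand-written read (the table reference is a placeholder, not this test's configuration):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        // Read only the used fields and evaluate the filter server-side,
        // mirroring the planner's push-down decision logged above.
        p.apply(BigQueryIO.readTableRows()
            .from("apache-beam-testing:beam.HACKER_NEWS")  // placeholder table
            .withMethod(Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }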
    Jan 17, 2022 6:45:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 17, 2022 6:45:31 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 17, 2022 6:45:31 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-HoKmZanH_5O4w8vR-XjUi706ly4Ab8VQuPSZeLzefik.jar
    Jan 17, 2022 6:45:32 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1185666401487291075.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-J8O5xSmDkXNJtCne7hl4n7kNuR1EfPazmj979fBCYKs.jar
    Jan 17, 2022 6:45:32 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 356 files cached, 1 files newly uploaded in 0 seconds
    Jan 17, 2022 6:45:32 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 17, 2022 6:45:32 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140660 bytes, hash 2c4a1c7c8cc11f277462277961521778e2ad07d1e91d146df629d2b90870296c> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-LEocfIzBHyd0Yid5YVIXeOKtB9HpHRRt9inSuQhwKWw.pb
    Jan 17, 2022 6:45:35 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 17, 2022 6:45:36 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 17, 2022 6:45:36 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 17, 2022 6:45:36 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 17, 2022 6:45:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-17_10_45_36-14713740980986987170?project=apache-beam-testing
    Jan 17, 2022 6:45:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-17_10_45_36-14713740980986987170
    Jan 17, 2022 6:45:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-17_10_45_36-14713740980986987170
    Jan 17, 2022 6:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-17T18:45:39.545Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 17, 2022 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T18:45:46.008Z: Worker configuration: e2-standard-2 in us-central1-c.
    Jan 17, 2022 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T18:45:46.601Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 17, 2022 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T18:45:46.638Z: Expanding GroupByKey operations into optimizable parts.
    Jan 17, 2022 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T18:45:46.671Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 17, 2022 6:45:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T18:45:46.725Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 17, 2022 6:45:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T18:45:46.751Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 17, 2022 6:45:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T18:45:46.775Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 17, 2022 6:45:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T18:45:47.135Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 17, 2022 6:45:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T18:45:47.213Z: Starting 5 workers in us-central1-c...
    Jan 17, 2022 6:46:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T18:46:14.131Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 17, 2022 6:46:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T18:46:27.203Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 17, 2022 6:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T18:46:52.489Z: Workers have started successfully.
    Jan 17, 2022 6:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T18:46:52.519Z: Workers have started successfully.
    Jan 17, 2022 6:47:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T18:47:25.003Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 17, 2022 6:47:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T18:47:25.165Z: Cleaning up.
    Jan 17, 2022 6:47:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T18:47:25.217Z: Stopping worker pool...
    Jan 17, 2022 6:49:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T18:49:49.428Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 17, 2022 6:49:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T18:49:49.469Z: Worker pool stopped.
    Jan 17, 2022 6:49:55 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-17_10_45_36-14713740980986987170 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8fe54b94-ea65-49aa-8230-52e4153c08c5 and timestamp: 2022-01-17T18:49:55.884000000Z:
                     Metric:                    Value:
                   read_time                     8.848
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 17, 2022 6:49:55 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 39.798 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 34s
165 actionable tasks: 107 executed, 56 from cache, 2 up-to-date

Publishing build scan...
https://scans.gradle.com/s/zcwldypdvstcs

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2924

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2924/display/redirect?page=changes>

Changes:

[mmack] [BEAM-8806] Integration test for SqsIO using Localstack

[noreply] Merge pull request #16507: [BEAM-13137] Fixes ES utest size flakiness


------------------------------------------
[...truncated 347.51 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 4a12bc63e680f7cf6b88d08741fcef8e
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 17, 2022 12:44:49 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 17, 2022 12:44:50 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 17, 2022 12:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 17, 2022 12:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 17, 2022 12:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 17, 2022 12:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 17, 2022 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 17, 2022 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 17, 2022 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 17, 2022 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@18392277]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 17, 2022 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 17, 2022 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 17, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 17, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 17, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 17, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 17, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1046044067]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 17, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 17, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 17, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 17, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 17, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 17, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 17, 2022 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 17, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 17, 2022 12:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 17, 2022 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 17, 2022 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-HoKmZanH_5O4w8vR-XjUi706ly4Ab8VQuPSZeLzefik.jar
    Jan 17, 2022 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1290278633660497410.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-2Uvt5dtTbQgH2aRinQ9IDdATKguou0z0fDdsO_jkgC4.jar
    Jan 17, 2022 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 356 files cached, 1 files newly uploaded in 0 seconds
    Jan 17, 2022 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 17, 2022 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140655 bytes, hash 030fcc082dcd9d0c4e5e4fe30bb5be1a1133d8e1c143c8247aae7492bd25ea7a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Aw_MCC3NnQxOXk_jC7W-GhEz2OHBQ8gkeq50kr0l6no.pb
    Jan 17, 2022 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 17, 2022 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 17, 2022 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 17, 2022 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 17, 2022 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-17_04_45_05-2768318948991004357?project=apache-beam-testing
    Jan 17, 2022 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-17_04_45_05-2768318948991004357
    Jan 17, 2022 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-17_04_45_05-2768318948991004357
    Jan 17, 2022 12:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-17T12:45:09.124Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 17, 2022 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T12:45:17.741Z: Worker configuration: e2-standard-2 in us-central1-b.
    Jan 17, 2022 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T12:45:18.528Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 17, 2022 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T12:45:18.564Z: Expanding GroupByKey operations into optimizable parts.
    Jan 17, 2022 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T12:45:18.602Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 17, 2022 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T12:45:18.675Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 17, 2022 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T12:45:18.692Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 17, 2022 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T12:45:18.728Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 17, 2022 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T12:45:19.131Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 17, 2022 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T12:45:19.221Z: Starting 5 workers in us-central1-b...
    Jan 17, 2022 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T12:45:30.937Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 17, 2022 12:46:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T12:46:03.576Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 17, 2022 12:46:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T12:46:33.119Z: Workers have started successfully.
    Jan 17, 2022 12:46:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T12:46:33.164Z: Workers have started successfully.
    Jan 17, 2022 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T12:47:00.909Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 17, 2022 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T12:47:01.066Z: Cleaning up.
    Jan 17, 2022 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T12:47:01.152Z: Stopping worker pool...
    Jan 17, 2022 12:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T12:49:29.178Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 17, 2022 12:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T12:49:29.252Z: Worker pool stopped.
    Jan 17, 2022 12:49:35 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-17_04_45_05-2768318948991004357 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 3a82daaf-1829-40e6-95eb-5954382910c4 and timestamp: 2022-01-17T12:49:35.458000000Z:
                     Metric:                    Value:
                   read_time                     5.995
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 17, 2022 12:49:35 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 49.496 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 16s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://scans.gradle.com/s/6kh5vbsibgl6y

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2923

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2923/display/redirect>

Changes:


------------------------------------------
[...truncated 346.21 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 4a12bc63e680f7cf6b88d08741fcef8e
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 17, 2022 6:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 17, 2022 6:44:54 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 17, 2022 6:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 17, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 17, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 17, 2022 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 17, 2022 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 17, 2022 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 17, 2022 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 17, 2022 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
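
For context on how a query like this reaches BeamSqlRelUtils.toPCollection: in user code a Beam SQL statement is typically applied with SqlTransform over a schema-aware PCollection registered under the table name the query references. A minimal, hypothetical sketch follows (the schema and sample row are illustrative only, not the IT's actual source):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TupleTag;

    public class HackerNewsQuerySketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // Columns referenced by the logged query.
        Schema schema = Schema.builder()
            .addStringField("title")
            .addStringField("by")
            .addStringField("type")
            .addInt32Field("score")
            .build();

        PCollection<Row> hackerNews = p.apply(
            Create.of(Row.withSchema(schema)
                    .addValues("A title", "alice", "story", 5).build())
                .withRowSchema(schema));  // attaches a RowCoder to Create's output

        // Register the input under the table name used in the SQL text.
        PCollection<Row> result = PCollectionTuple
            .of(new TupleTag<>("HACKER_NEWS"), hackerNews)
            .apply(SqlTransform.query(
                "SELECT `by` AS author, `type`, `title`, `score` "
                    + "FROM HACKER_NEWS "
                    + "WHERE (`type` = 'story' OR `type` = 'job') AND `score` > 2"));

        p.run().waitUntilFinish();
      }
    }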


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@18392277]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
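
The exception above lists its own remedies: set a Coder explicitly, or, since the elements are Beam Rows, attach a schema so a RowCoder can be used. As a general illustration of the second remedy (a minimal sketch, not the IT's actual fix), a ParDo that emits Row needs setRowSchema on its output, because the CoderRegistry cannot infer a coder for Row:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFixSketch {
      /** Wraps each string in a single-field Row, attaching the schema explicitly. */
      static PCollection<Row> toRows(PCollection<String> words) {
        final Schema schema = Schema.builder().addStringField("word").build();
        return words
            .apply(ParDo.of(new DoFn<String, Row>() {
              @ProcessElement
              public void process(@Element String word, OutputReceiver<Row> out) {
                out.output(Row.withSchema(schema).addValue(word).build());
              }
            }))
            // Without this call, coder inference fails with exactly the
            // IllegalStateException shown in the stack trace above.
            .setRowSchema(schema);
      }
    }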

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 17, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 17, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 17, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 17, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 17, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 17, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 17, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1046044067]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 17, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 17, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 17, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 17, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 17, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 17, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 17, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 17, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
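
This is the push-down test behaving as intended: the BeamCalcRel seen in the earlier plans has been folded into the source, so only the four used fields are read and the filter is evaluated server-side via the BigQuery Storage Read API. Outside of Beam SQL, roughly the same projection and filter push-down can be requested directly through BigQueryIO; a sketch under an assumed table name (not the IT's configuration):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.values.PCollection;

    class DirectReadPushDownSketch {
      static PCollection<TableRow> read(Pipeline p) {
        return p.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full")  // hypothetical table
                .withMethod(TypedRead.Method.DIRECT_READ)
                // Projection push-down: only these columns are streamed.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Filter push-down: evaluated by the BigQuery read session.
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));
      }
    }
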
    Jan 17, 2022 6:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 17, 2022 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 17, 2022 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-HoKmZanH_5O4w8vR-XjUi706ly4Ab8VQuPSZeLzefik.jar
    Jan 17, 2022 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3008015366915776119.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-73jDGRlgNDvWnAzPh9-YLjUUQSf3HZoE3ZuNf8fOnfQ.jar
    Jan 17, 2022 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 356 files cached, 1 files newly uploaded in 0 seconds
    Jan 17, 2022 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 17, 2022 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140660 bytes, hash 61215169ed8a2cd53b7c49fee2e0885ea699a51d2ac6b15eaa7d7f250e9c2893> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-YSFRae2KLNU7fEn-4uCIXqaZpR0qxrFeqn1_JQ6cKJM.pb
    Jan 17, 2022 6:45:09 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 17, 2022 6:45:09 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 17, 2022 6:45:09 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 17, 2022 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 17, 2022 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-16_22_45_10-3122181523630456479?project=apache-beam-testing
    Jan 17, 2022 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-16_22_45_10-3122181523630456479
    Jan 17, 2022 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-16_22_45_10-3122181523630456479
    Jan 17, 2022 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-17T06:45:13.646Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 17, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T06:45:23.015Z: Worker configuration: e2-standard-2 in us-central1-a.
    Jan 17, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T06:45:23.852Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 17, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T06:45:23.891Z: Expanding GroupByKey operations into optimizable parts.
    Jan 17, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T06:45:23.918Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 17, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T06:45:23.994Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 17, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T06:45:24.020Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 17, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T06:45:24.053Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 17, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T06:45:24.373Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 17, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T06:45:24.474Z: Starting 5 workers in us-central1-a...
    Jan 17, 2022 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T06:45:38.640Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 17, 2022 6:46:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T06:46:13.084Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 17, 2022 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T06:46:38.581Z: Workers have started successfully.
    Jan 17, 2022 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T06:46:38.613Z: Workers have started successfully.
    Jan 17, 2022 6:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T06:47:06.916Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 17, 2022 6:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T06:47:07.065Z: Cleaning up.
    Jan 17, 2022 6:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T06:47:07.150Z: Stopping worker pool...
    Jan 17, 2022 6:49:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T06:49:32.990Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 17, 2022 6:49:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T06:49:33.038Z: Worker pool stopped.
    Jan 17, 2022 6:49:38 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-16_22_45_10-3122181523630456479 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 7068a263-893f-437b-b3f7-aa08d11ec087 and timestamp: 2022-01-17T06:49:39.037000000Z:
                     Metric:                    Value:
                   read_time                     7.132
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 17, 2022 6:49:39 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 49.128 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 18s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://scans.gradle.com/s/skowfxmhbijvo

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2922

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2922/display/redirect>

Changes:


------------------------------------------
[...truncated 346.82 KB...]
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 4a12bc63e680f7cf6b88d08741fcef8e
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 17, 2022 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 17, 2022 12:44:53 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 17, 2022 12:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 17, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 17, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 17, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 17, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 17, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 17, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 17, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@18392277]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 17, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 17, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 17, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 17, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 17, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 17, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 17, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1588148030]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 17, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 17, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 17, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 17, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 17, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 17, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 17, 2022 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 17, 2022 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 17, 2022 12:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 17, 2022 12:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 17, 2022 12:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-HoKmZanH_5O4w8vR-XjUi706ly4Ab8VQuPSZeLzefik.jar
    Jan 17, 2022 12:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8627197064319186824.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-oScsluEgTGxQxcT5m4y7uo3tK94B2aD06Gi5HWL_1Q8.jar
    Jan 17, 2022 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 356 files cached, 1 files newly uploaded in 0 seconds
    Jan 17, 2022 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 17, 2022 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140660 bytes, hash 882c549f5a693bd94a48de8886c0c64db77cc4ff4305afcafaeee19ff52f5a21> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-iCxUn1ppO9lKSN6IhsDGTbd8xP9DBa_K-u7hn_UvWiE.pb
    Jan 17, 2022 12:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 17, 2022 12:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 17, 2022 12:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 17, 2022 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 17, 2022 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-16_16_45_09-12569420900427386528?project=apache-beam-testing
    Jan 17, 2022 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-16_16_45_09-12569420900427386528
    Jan 17, 2022 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-16_16_45_09-12569420900427386528
    Jan 17, 2022 12:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-17T00:45:12.294Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 17, 2022 12:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T00:45:23.166Z: Worker configuration: e2-standard-2 in us-central1-c.
    Jan 17, 2022 12:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T00:45:23.861Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 17, 2022 12:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T00:45:23.899Z: Expanding GroupByKey operations into optimizable parts.
    Jan 17, 2022 12:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T00:45:23.920Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 17, 2022 12:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T00:45:23.971Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 17, 2022 12:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T00:45:23.997Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 17, 2022 12:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T00:45:24.024Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 17, 2022 12:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T00:45:24.302Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 17, 2022 12:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T00:45:24.376Z: Starting 5 workers in us-central1-c...
    Jan 17, 2022 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T00:45:45.179Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 17, 2022 12:46:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T00:46:09.118Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 17, 2022 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T00:46:34.056Z: Workers have started successfully.
    Jan 17, 2022 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T00:46:34.086Z: Workers have started successfully.
    Jan 17, 2022 12:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T00:47:04.751Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 17, 2022 12:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T00:47:04.899Z: Cleaning up.
    Jan 17, 2022 12:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T00:47:04.986Z: Stopping worker pool...
    Jan 17, 2022 12:49:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T00:49:29.171Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 17, 2022 12:49:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-17T00:49:29.211Z: Worker pool stopped.
    Jan 17, 2022 12:49:35 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-16_16_45_09-12569420900427386528 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 13b40cdf-21e5-4e7b-b7e0-240e6a16c759 and timestamp: 2022-01-17T00:49:35.693000000Z:
                     Metric:                    Value:
                   read_time                     6.415
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 17, 2022 12:49:35 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 47.656 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 16s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://scans.gradle.com/s/rfwxtrs27pipm

Build cache (/home/jenkins/.gradle/caches/build-cache-1) removing files not accessed on or after Mon Jan 10 00:44:24 UTC 2022.
Build cache (/home/jenkins/.gradle/caches/build-cache-1) cleanup deleted 526 files/directories.
Build cache (/home/jenkins/.gradle/caches/build-cache-1) cleaned up in 0.639 secs.
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2921

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2921/display/redirect>

Changes:


------------------------------------------
[...truncated 346.72 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 16, 2022 6:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 16, 2022 6:44:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 16, 2022 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 16, 2022 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 16, 2022 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 16, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 16, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 16, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 16, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 16, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@18392277]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 16, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 16, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 16, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 16, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 16, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 16, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 16, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1046044067]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 16, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 16, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 16, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 16, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 16, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 16, 2022 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 16, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 16, 2022 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
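
For context, the SQLPlan/BEAMPlan pair above is the push-down rewrite at work:
the LogicalProject and LogicalFilter are folded into the BigQuery source
(BeamPushDownIOSourceRel), so only the four used fields and the matching rows
are read. The query shape itself can be reproduced against any schema-aware
PCollection with the public SqlTransform API. The sketch below assumes an
`input` collection with fields by/type/title/score; it does not exercise the
BigQuery push-down path, which requires the BigQuery table provider.

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushDownQuerySketch {
      // `input` must already carry a schema (see the setRowSchema sketch
      // earlier); PCOLLECTION is the implicit name of the single input table.
      static PCollection<Row> applyQuery(PCollection<Row> input) {
        return input.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score "
                    + "FROM PCOLLECTION "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
      }
    }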
    Jan 16, 2022 6:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 16, 2022 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 16, 2022 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-HoKmZanH_5O4w8vR-XjUi706ly4Ab8VQuPSZeLzefik.jar
    Jan 16, 2022 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1348278256366651510.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-aNwuktb42ev4Gl8mdnpW_-KNR9_nlg2d6MhUiB1PVqM.jar
    Jan 16, 2022 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/commons-lang/commons-lang/2.6/ce1edb914c94ebc388f086c6827e8bdeec71ac2/commons-lang-2.6.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-lang-2.6-UPEbCfh3wpTVbyRGP0fSj5Kc9QRPZIZhwPDPuumi9Jw.jar
    Jan 16, 2022 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/commons-io/commons-io/2.6/815893df5f31da2ece4040fe0a12fd44b577afaf/commons-io-2.6.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-io-2.6--HfTBGYKwqFC84ZbrfyXHex-1zx0fH-NXS9ROcpzZRM.jar
    Jan 16, 2022 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 354 files cached, 3 files newly uploaded in 0 seconds
    Jan 16, 2022 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 16, 2022 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140660 bytes, hash a47c0e47eca2da3c6e4d918b7c397b68ee3da2f57e7f100e97e9333f5275d452> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-pHwOR-yi2jxuTZGLfDl7aO49ovV-fxAOl-kzP1J11FI.pb
    Jan 16, 2022 6:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 16, 2022 6:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 16, 2022 6:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 16, 2022 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 16, 2022 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-16_10_45_10-2140565856958614755?project=apache-beam-testing
    Jan 16, 2022 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-16_10_45_10-2140565856958614755
    Jan 16, 2022 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-16_10_45_10-2140565856958614755
    Jan 16, 2022 6:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-16T18:45:13.290Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 16, 2022 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T18:45:22.436Z: Worker configuration: e2-standard-2 in us-central1-c.
    Jan 16, 2022 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T18:45:23.029Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 16, 2022 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T18:45:23.059Z: Expanding GroupByKey operations into optimizable parts.
    Jan 16, 2022 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T18:45:23.082Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 16, 2022 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T18:45:23.141Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 16, 2022 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T18:45:23.166Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 16, 2022 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T18:45:23.199Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 16, 2022 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T18:45:23.639Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 16, 2022 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T18:45:23.720Z: Starting 5 workers in us-central1-c...
    Jan 16, 2022 6:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T18:45:54.186Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 16, 2022 6:46:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T18:46:11.726Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 16, 2022 6:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T18:46:36.684Z: Workers have started successfully.
    Jan 16, 2022 6:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T18:46:36.701Z: Workers have started successfully.
    Jan 16, 2022 6:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T18:47:05.304Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 16, 2022 6:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T18:47:05.456Z: Cleaning up.
    Jan 16, 2022 6:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T18:47:05.536Z: Stopping worker pool...
    Jan 16, 2022 6:49:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T18:49:30.241Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 16, 2022 6:49:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T18:49:30.294Z: Worker pool stopped.
    Jan 16, 2022 6:49:35 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-16_10_45_10-2140565856958614755 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 1d7d43bb-fe20-4e77-aefc-d43e49d4d523 and timestamp: 2022-01-16T18:49:35.913000000Z:
                     Metric:                    Value:
                   read_time                     7.338
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 16, 2022 6:49:36 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 46.242 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 15s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://scans.gradle.com/s/jpoycrb5kjesq

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2920

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2920/display/redirect>

Changes:


------------------------------------------
[...truncated 301.57 KB...]

> Task :runners:direct-java:compileTestJava FROM-CACHE
Custom actions are attached to task ':runners:direct-java:compileTestJava'.
Build cache key for task ':runners:direct-java:compileTestJava' is 4e2a9d860b8de4df0e3f8500f5a9e282
Task ':runners:direct-java:compileTestJava' is not up-to-date because:
  No history is available.
Loaded cache entry for task ':runners:direct-java:compileTestJava' with cache key 4e2a9d860b8de4df0e3f8500f5a9e282
:runners:direct-java:compileTestJava (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.149 secs.
:runners:direct-java:testClasses (Thread[Execution worker for ':' Thread 8,5,main]) started.

> Task :runners:direct-java:testClasses UP-TO-DATE
Skipping task ':runners:direct-java:testClasses' as it has no actions.
:runners:direct-java:testClasses (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.0 secs.
:runners:direct-java:shadowTestJar (Thread[Execution worker for ':' Thread 8,5,main]) started.

> Task :sdks:java:io:google-cloud-platform:jar
Caching disabled for task ':sdks:java:io:google-cloud-platform:jar' because:
  Not worth caching
Task ':sdks:java:io:google-cloud-platform:jar' is not up-to-date because:
  No history is available.
file or directory '<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/google-cloud-platform/build/resources/main'>, not found
:sdks:java:io:google-cloud-platform:jar (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 0.316 secs.
:runners:google-cloud-dataflow-java:compileJava (Thread[Execution worker for ':' Thread 3,5,main]) started.

> Task :runners:direct-java:shadowTestJar FROM-CACHE
Build cache key for task ':runners:direct-java:shadowTestJar' is 9aa0b05576a160bd84360e768bec5b0e
Task ':runners:direct-java:shadowTestJar' is not up-to-date because:
  No history is available.
Loaded cache entry for task ':runners:direct-java:shadowTestJar' with cache key 9aa0b05576a160bd84360e768bec5b0e
:runners:direct-java:shadowTestJar (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.25 secs.
:sdks:java:io:mongodb:compileTestJava (Thread[Execution worker for ':' Thread 8,5,main]) started.

> Task :runners:google-cloud-dataflow-java:compileJava FROM-CACHE
Custom actions are attached to task ':runners:google-cloud-dataflow-java:compileJava'.
Build cache key for task ':runners:google-cloud-dataflow-java:compileJava' is 22c8f38254b2237b27080904af88d672
Task ':runners:google-cloud-dataflow-java:compileJava' is not up-to-date because:
  No history is available.
Loaded cache entry for task ':runners:google-cloud-dataflow-java:compileJava' with cache key 22c8f38254b2237b27080904af88d672
:runners:google-cloud-dataflow-java:compileJava (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 0.252 secs.
:runners:google-cloud-dataflow-java:classes (Thread[Execution worker for ':' Thread 3,5,main]) started.

> Task :runners:google-cloud-dataflow-java:classes
Skipping task ':runners:google-cloud-dataflow-java:classes' as it has no actions.
:runners:google-cloud-dataflow-java:classes (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 0.0 secs.
:runners:google-cloud-dataflow-java:jar (Thread[Execution worker for ':' Thread 3,5,main]) started.

> Task :runners:google-cloud-dataflow-java:jar
Caching disabled for task ':runners:google-cloud-dataflow-java:jar' because:
  Not worth caching
Task ':runners:google-cloud-dataflow-java:jar' is not up-to-date because:
  No history is available.
:runners:google-cloud-dataflow-java:jar (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 0.096 secs.
:runners:google-cloud-dataflow-java:worker:legacy-worker:compileJava (Thread[Execution worker for ':' Thread 3,5,main]) started.

> Task :sdks:java:extensions:protobuf:extractIncludeTestProto
Caching disabled for task ':sdks:java:extensions:protobuf:extractIncludeTestProto' because:
  Caching has not been enabled for the task
Task ':sdks:java:extensions:protobuf:extractIncludeTestProto' is not up-to-date because:
  No history is available.
:sdks:java:extensions:protobuf:extractIncludeTestProto (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 1.223 secs.
:sdks:java:extensions:protobuf:generateTestProto (Thread[Execution worker for ':' Thread 7,5,main]) started.

> Task :sdks:java:extensions:protobuf:generateTestProto FROM-CACHE
Build cache key for task ':sdks:java:extensions:protobuf:generateTestProto' is 362c3872da7f8ff731e7c2c89b63c9ee
Task ':sdks:java:extensions:protobuf:generateTestProto' is not up-to-date because:
  No history is available.
Loaded cache entry for task ':sdks:java:extensions:protobuf:generateTestProto' with cache key 362c3872da7f8ff731e7c2c89b63c9ee
:sdks:java:extensions:protobuf:generateTestProto (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.016 secs.
:sdks:java:extensions:protobuf:compileTestJava (Thread[Execution worker for ':' Thread 7,5,main]) started.

> Task :sdks:java:extensions:protobuf:compileTestJava FROM-CACHE
Custom actions are attached to task ':sdks:java:extensions:protobuf:compileTestJava'.
Build cache key for task ':sdks:java:extensions:protobuf:compileTestJava' is a3ff4f1c9f2329937c2b20b0e1df795e
Task ':sdks:java:extensions:protobuf:compileTestJava' is not up-to-date because:
  No history is available.
Loaded cache entry for task ':sdks:java:extensions:protobuf:compileTestJava' with cache key a3ff4f1c9f2329937c2b20b0e1df795e
:sdks:java:extensions:protobuf:compileTestJava (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.095 secs.
:sdks:java:extensions:protobuf:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) started.

> Task :sdks:java:extensions:protobuf:testClasses
Skipping task ':sdks:java:extensions:protobuf:testClasses' as it has no actions.
:sdks:java:extensions:protobuf:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:protobuf:testJar (Thread[Execution worker for ':' Thread 7,5,main]) started.

> Task :sdks:java:io:mongodb:compileTestJava FROM-CACHE
Custom actions are attached to task ':sdks:java:io:mongodb:compileTestJava'.
Build cache key for task ':sdks:java:io:mongodb:compileTestJava' is a99bf448e5d4ad00b6391e64438e4466
Task ':sdks:java:io:mongodb:compileTestJava' is not up-to-date because:
  No history is available.
Loaded cache entry for task ':sdks:java:io:mongodb:compileTestJava' with cache key a99bf448e5d4ad00b6391e64438e4466
:sdks:java:io:mongodb:compileTestJava (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.272 secs.
:sdks:java:io:mongodb:testClasses (Thread[Execution worker for ':' Thread 8,5,main]) started.

> Task :sdks:java:io:mongodb:testClasses UP-TO-DATE
Skipping task ':sdks:java:io:mongodb:testClasses' as it has no actions.
:sdks:java:io:mongodb:testClasses (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.0 secs.
:sdks:java:io:mongodb:testJar (Thread[Execution worker for ':' Thread 8,5,main]) started.

> Task :sdks:java:io:mongodb:testJar
Caching disabled for task ':sdks:java:io:mongodb:testJar' because:
  Not worth caching
Task ':sdks:java:io:mongodb:testJar' is not up-to-date because:
  No history is available.
file or directory '<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/resources/test'>, not found
:sdks:java:io:mongodb:testJar (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.01 secs.

> Task :sdks:java:extensions:protobuf:testJar
Caching disabled for task ':sdks:java:extensions:protobuf:testJar' because:
  Not worth caching
Task ':sdks:java:extensions:protobuf:testJar' is not up-to-date because:
  No history is available.
:sdks:java:extensions:protobuf:testJar (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.072 secs.
:sdks:java:io:google-cloud-platform:compileTestJava (Thread[Execution worker for ':' Thread 7,5,main]) started.

> Task :runners:google-cloud-dataflow-java:worker:legacy-worker:compileJava FROM-CACHE
file or directory '<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/src/main/java'>, not found
Custom actions are attached to task ':runners:google-cloud-dataflow-java:worker:legacy-worker:compileJava'.
Build cache key for task ':runners:google-cloud-dataflow-java:worker:legacy-worker:compileJava' is 544e9240bd53508e4501b20ef1138800
Task ':runners:google-cloud-dataflow-java:worker:legacy-worker:compileJava' is not up-to-date because:
  No history is available.
Loaded cache entry for task ':runners:google-cloud-dataflow-java:worker:legacy-worker:compileJava' with cache key 544e9240bd53508e4501b20ef1138800
:runners:google-cloud-dataflow-java:worker:legacy-worker:compileJava (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 0.393 secs.
:runners:google-cloud-dataflow-java:worker:legacy-worker:classes (Thread[Execution worker for ':' Thread 3,5,main]) started.

> Task :runners:google-cloud-dataflow-java:worker:legacy-worker:classes UP-TO-DATE
Skipping task ':runners:google-cloud-dataflow-java:worker:legacy-worker:classes' as it has no actions.
:runners:google-cloud-dataflow-java:worker:legacy-worker:classes (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 0.0 secs.
:runners:google-cloud-dataflow-java:worker:legacy-worker:shadowJar (Thread[Execution worker for ':' Thread 4,5,main]) started.

> Task :runners:google-cloud-dataflow-java:worker:legacy-worker:shadowJar FROM-CACHE
Custom actions are attached to task ':runners:google-cloud-dataflow-java:worker:legacy-worker:shadowJar'.
Build cache key for task ':runners:google-cloud-dataflow-java:worker:legacy-worker:shadowJar' is f82993ca17ee017e5c0031efa41de682
Task ':runners:google-cloud-dataflow-java:worker:legacy-worker:shadowJar' is not up-to-date because:
  No history is available.
Loaded cache entry for task ':runners:google-cloud-dataflow-java:worker:legacy-worker:shadowJar' with cache key f82993ca17ee017e5c0031efa41de682
:runners:google-cloud-dataflow-java:worker:legacy-worker:shadowJar (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.176 secs.

> Task :sdks:java:io:google-cloud-platform:compileTestJava FROM-CACHE
Custom actions are attached to task ':sdks:java:io:google-cloud-platform:compileTestJava'.
Build cache key for task ':sdks:java:io:google-cloud-platform:compileTestJava' is fdc21df0c55a2885ddf5128c239e3e93
Task ':sdks:java:io:google-cloud-platform:compileTestJava' is not up-to-date because:
  No history is available.
Loaded cache entry for task ':sdks:java:io:google-cloud-platform:compileTestJava' with cache key fdc21df0c55a2885ddf5128c239e3e93
:sdks:java:io:google-cloud-platform:compileTestJava (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.455 secs.
:sdks:java:io:google-cloud-platform:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) started.

> Task :sdks:java:io:google-cloud-platform:testClasses
Skipping task ':sdks:java:io:google-cloud-platform:testClasses' as it has no actions.
:sdks:java:io:google-cloud-platform:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:io:google-cloud-platform:testJar (Thread[Execution worker for ':' Thread 7,5,main]) started.

> Task :sdks:java:io:google-cloud-platform:testJar
Caching disabled for task ':sdks:java:io:google-cloud-platform:testJar' because:
  Not worth caching
Task ':sdks:java:io:google-cloud-platform:testJar' is not up-to-date because:
  No history is available.
:sdks:java:io:google-cloud-platform:testJar (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.26 secs.
:runners:google-cloud-dataflow-java:compileTestJava (Thread[Execution worker for ':' Thread 7,5,main]) started.

> Task :runners:google-cloud-dataflow-java:compileTestJava FROM-CACHE
Custom actions are attached to task ':runners:google-cloud-dataflow-java:compileTestJava'.
Build cache key for task ':runners:google-cloud-dataflow-java:compileTestJava' is 4fb76d1f4cea3fb2f439389f027f371b
Task ':runners:google-cloud-dataflow-java:compileTestJava' is not up-to-date because:
  No history is available.
Loaded cache entry for task ':runners:google-cloud-dataflow-java:compileTestJava' with cache key 4fb76d1f4cea3fb2f439389f027f371b
:runners:google-cloud-dataflow-java:compileTestJava (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.199 secs.
:runners:google-cloud-dataflow-java:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) started.

> Task :runners:google-cloud-dataflow-java:testClasses UP-TO-DATE
Skipping task ':runners:google-cloud-dataflow-java:testClasses' as it has no actions.
:runners:google-cloud-dataflow-java:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:runners:google-cloud-dataflow-java:testJar (Thread[Execution worker for ':' Thread 7,5,main]) started.

> Task :runners:google-cloud-dataflow-java:testJar
Caching disabled for task ':runners:google-cloud-dataflow-java:testJar' because:
  Not worth caching
Task ':runners:google-cloud-dataflow-java:testJar' is not up-to-date because:
  No history is available.
file or directory '<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/build/resources/test'>, not found
:runners:google-cloud-dataflow-java:testJar (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.03 secs.

> Task :sdks:java:extensions:sql:generateFmppSources
Failed to get resource: HEAD. [HTTP HTTP/1.1 502 Bad Gateway: https://jcenter.bintray.com/oro/oro/maven-metadata.xml)]
Failed to get resource: HEAD. [HTTP HTTP/1.1 502 Bad Gateway: https://jcenter.bintray.com/oro/oro/maven-metadata.xml)]
Failed to get resource: HEAD. [HTTP HTTP/1.1 502 Bad Gateway: https://jcenter.bintray.com/oro/oro/maven-metadata.xml)]

> Task :sdks:java:extensions:sql:generateFmppSources FAILED
:sdks:java:extensions:sql:generateFmppSources (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 31.522 secs.
:sdks:java:extensions:sql:createCheckerFrameworkManifest (Thread[Execution worker for ':' Thread 8,5,main]) started.

> Task :sdks:java:extensions:sql:createCheckerFrameworkManifest
Caching disabled for task ':sdks:java:extensions:sql:createCheckerFrameworkManifest' because:
  Caching has not been enabled for the task
Task ':sdks:java:extensions:sql:createCheckerFrameworkManifest' is not up-to-date because:
  No history is available.
:sdks:java:extensions:sql:createCheckerFrameworkManifest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.004 secs.
:sdks:java:extensions:sql:processResources (Thread[Execution worker for ':' Thread 8,5,main]) started.

> Task :sdks:java:extensions:sql:processResources
Caching disabled for task ':sdks:java:extensions:sql:processResources' because:
  Not worth caching
Task ':sdks:java:extensions:sql:processResources' is not up-to-date because:
  No history is available.
:sdks:java:extensions:sql:processResources (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.019 secs.
:sdks:java:extensions:sql:processTestResources (Thread[Execution worker for ':' Thread 8,5,main]) started.

> Task :sdks:java:extensions:sql:processTestResources
Caching disabled for task ':sdks:java:extensions:sql:processTestResources' because:
  Not worth caching
Task ':sdks:java:extensions:sql:processTestResources' is not up-to-date because:
  No history is available.
:sdks:java:extensions:sql:processTestResources (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.005 secs.

FAILURE: Build failed with an exception.

* Where:
Build file '<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build.gradle'> line: 176

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:generateFmppSources'.
> Could not resolve all files for configuration ':sdks:java:extensions:sql:fmppTask'.
   > Could not resolve oro:oro:[2.0.8,).
     Required by:
         project :sdks:java:extensions:sql > com.googlecode.fmpp-maven-plugin:fmpp-maven-plugin:1.0 > net.sourceforge.fmpp:fmpp:0.9.14
      > Failed to list versions for oro:oro.
         > Unable to load Maven meta-data from https://jcenter.bintray.com/oro/oro/maven-metadata.xml.
            > Could not HEAD 'https://jcenter.bintray.com/oro/oro/maven-metadata.xml'. Received status code 502 from server: Bad Gateway

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 50s
158 actionable tasks: 97 executed, 59 from cache, 2 up-to-date

Publishing build scan...
https://scans.gradle.com/s/jqfb4pwccu2w6

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2919

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2919/display/redirect>

Changes:


------------------------------------------
[...truncated 346.53 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 4a12bc63e680f7cf6b88d08741fcef8e
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 16, 2022 6:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 16, 2022 6:44:58 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 16, 2022 6:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 16, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 16, 2022 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 16, 2022 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 16, 2022 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 16, 2022 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 16, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 16, 2022 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@18392277]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 16, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 16, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 16, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 16, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 16, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 16, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 16, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1046044067]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 16, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 16, 2022 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 16, 2022 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 16, 2022 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 16, 2022 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 16, 2022 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 16, 2022 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 16, 2022 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 16, 2022 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 16, 2022 6:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 16, 2022 6:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-HoKmZanH_5O4w8vR-XjUi706ly4Ab8VQuPSZeLzefik.jar
    Jan 16, 2022 6:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test292049469859881088.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-g5TdK7N5_nfNJ1TCQvjFb4PwZgwYX7TvH6QpZ-tRWSU.jar
    Jan 16, 2022 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 356 files cached, 1 files newly uploaded in 1 seconds
    Jan 16, 2022 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 16, 2022 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140660 bytes, hash acf1f555d80854270ca60ae641a9ffaa60928786c6c3e3d6b6f27ff18f7ae01e> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-rPH1VdgIVCcMpgrmQan_qmCSh4bGw-PWtvJ_8Y964B4.pb
    Jan 16, 2022 6:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 16, 2022 6:45:14 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 16, 2022 6:45:14 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 16, 2022 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 16, 2022 6:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-15_22_45_14-14748712355895809163?project=apache-beam-testing
    Jan 16, 2022 6:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-15_22_45_14-14748712355895809163
    Jan 16, 2022 6:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-15_22_45_14-14748712355895809163
    Jan 16, 2022 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-16T06:45:17.762Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 16, 2022 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T06:45:28.199Z: Worker configuration: e2-standard-2 in us-central1-b.
    Jan 16, 2022 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T06:45:28.865Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 16, 2022 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T06:45:28.922Z: Expanding GroupByKey operations into optimizable parts.
    Jan 16, 2022 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T06:45:28.956Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 16, 2022 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T06:45:29.019Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 16, 2022 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T06:45:29.037Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 16, 2022 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T06:45:29.068Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 16, 2022 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T06:45:29.494Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 16, 2022 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T06:45:29.569Z: Starting 5 workers in us-central1-b...
    Jan 16, 2022 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T06:45:39.089Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 16, 2022 6:46:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T06:46:18.549Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 16, 2022 6:46:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T06:46:44.566Z: Workers have started successfully.
    Jan 16, 2022 6:46:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T06:46:44.595Z: Workers have started successfully.
    Jan 16, 2022 6:47:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T06:47:21.023Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 16, 2022 6:47:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T06:47:21.151Z: Cleaning up.
    Jan 16, 2022 6:47:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T06:47:21.234Z: Stopping worker pool...
    Jan 16, 2022 6:49:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T06:49:43.127Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 16, 2022 6:49:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T06:49:43.193Z: Worker pool stopped.
    Jan 16, 2022 6:49:48 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-15_22_45_14-14748712355895809163 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): fa49d02a-da37-479d-b3d2-7df909f8aa5b and timestamp: 2022-01-16T06:49:48.470000000Z:
                     Metric:                    Value:
                   read_time                     8.993
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 16, 2022 6:49:48 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 55.049 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 25s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://scans.gradle.com/s/ggljsjlam6jhu

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2918

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2918/display/redirect>

Changes:


------------------------------------------
[...truncated 346.46 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 4a12bc63e680f7cf6b88d08741fcef8e
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 16, 2022 12:44:49 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 16, 2022 12:44:50 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 16, 2022 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 16, 2022 12:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 16, 2022 12:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 16, 2022 12:44:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 16, 2022 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 16, 2022 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 16, 2022 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 16, 2022 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
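
    The SQL and plans above come from Beam SQL's Calcite planner. For reference, the same query shape can be issued from the Java SDK with SqlTransform over any schema-aware PCollection. A minimal sketch, not the integration test's exact code (push-down additionally requires the BigQuery table provider and DIRECT_READ; the in-memory input here is a stand-in):

        import org.apache.beam.sdk.Pipeline;
        import org.apache.beam.sdk.extensions.sql.SqlTransform;
        import org.apache.beam.sdk.schemas.Schema;
        import org.apache.beam.sdk.transforms.Create;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        public class QueryShapeSketch {
          public static void main(String[] args) {
            Pipeline p = Pipeline.create();
            // Schema covering the columns the test query touches.
            Schema schema = Schema.builder()
                .addStringField("by")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();
            PCollection<Row> input = p.apply(
                Create.of(Row.withSchema(schema)
                        .addValues("alice", "story", "hello", 3L).build())
                    .withRowSchema(schema));
            // PCOLLECTION is SqlTransform's implicit name for its single input.
            PCollection<Row> filtered = input.apply(
                SqlTransform.query(
                    "SELECT `by` AS author, type, title, score FROM PCOLLECTION "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
            p.run().waitUntilFinish();
          }
        }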


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@167018904]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
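
    The exception text above names the two remedies: set a coder explicitly, or attach a row schema. A minimal sketch of both, using an assumed, simplified stand-in for the test's RowMonitor ParDo (not Beam's actual test code):

        import org.apache.beam.sdk.Pipeline;
        import org.apache.beam.sdk.coders.RowCoder;
        import org.apache.beam.sdk.schemas.Schema;
        import org.apache.beam.sdk.transforms.Create;
        import org.apache.beam.sdk.transforms.DoFn;
        import org.apache.beam.sdk.transforms.ParDo;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        public class RowCoderRemedy {
          public static void main(String[] args) {
            Pipeline p = Pipeline.create();
            Schema schema = Schema.builder().addInt64Field("score").build();
            PCollection<Row> rows = p.apply(
                Create.of(Row.withSchema(schema).addValues(3L).build())
                    .withRowSchema(schema));
            // A ParDo that re-emits Rows loses the schema, so its output needs
            // an explicit schema or coder; otherwise finishSpecifying fails
            // with exactly the IllegalStateException shown above.
            PCollection<Row> monitored = rows.apply(
                ParDo.of(new DoFn<Row, Row>() {
                  @ProcessElement
                  public void process(@Element Row row, OutputReceiver<Row> out) {
                    out.output(row);
                  }
                }));
            monitored.setRowSchema(schema);             // remedy 1 (preferred for Row)
            // monitored.setCoder(RowCoder.of(schema)); // remedy 2, equivalent
            p.run().waitUntilFinish();
          }
        }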

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 16, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 16, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 16, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 16, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 16, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 16, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 16, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2102730438]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 16, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 16, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 16, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 16, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 16, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 16, 2022 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 16, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 16, 2022 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
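
    For comparison, the pushed-down projection and filter correspond to what a direct BigQueryIO read with DIRECT_READ would request explicitly. A hedged sketch of that equivalent read (the table name below is a placeholder, not the test's actual dataset):

        import java.util.Arrays;
        import com.google.api.services.bigquery.model.TableRow;
        import org.apache.beam.sdk.Pipeline;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
        import org.apache.beam.sdk.values.PCollection;

        public class PushDownEquivalent {
          public static void main(String[] args) {
            Pipeline p = Pipeline.create();
            // Only the four used columns are fetched, and the filter is
            // evaluated by the BigQuery Storage Read API, not by workers.
            PCollection<TableRow> rows = p.apply(
                BigQueryIO.readTableRows()
                    .from("bigquery-public-data:hacker_news.full") // table assumed
                    .withMethod(TypedRead.Method.DIRECT_READ)
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    .withRowRestriction(
                        "(type = 'story' OR type = 'job') AND score > 2"));
            p.run().waitUntilFinish();
          }
        }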
    Jan 16, 2022 12:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 16, 2022 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 16, 2022 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-HoKmZanH_5O4w8vR-XjUi706ly4Ab8VQuPSZeLzefik.jar
    Jan 16, 2022 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5744109758578874613.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-gk1lTHiEkvDg18_U1luNrRO6Bptxta-wOiUVKYRXBS8.jar
    Jan 16, 2022 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 356 files cached, 1 file newly uploaded in 1 second
    Jan 16, 2022 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 16, 2022 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140660 bytes, hash eee41eac7b16e505615be62a5e71f4bd2869f1bff4f634a079632bb108ce1619> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-7uQerHsW5QVhW-YqXnH0vShp8b_09jSgeWMrsQjOFhk.pb
    Jan 16, 2022 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 16, 2022 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 16, 2022 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 16, 2022 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 16, 2022 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-15_16_45_05-1973403425643067297?project=apache-beam-testing
    Jan 16, 2022 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-15_16_45_05-1973403425643067297
    Jan 16, 2022 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-15_16_45_05-1973403425643067297
    Jan 16, 2022 12:45:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-16T00:45:09.077Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 16, 2022 12:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T00:45:17.605Z: Worker configuration: e2-standard-2 in us-central1-a.
    Jan 16, 2022 12:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T00:45:18.346Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 16, 2022 12:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T00:45:18.386Z: Expanding GroupByKey operations into optimizable parts.
    Jan 16, 2022 12:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T00:45:18.414Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 16, 2022 12:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T00:45:18.481Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 16, 2022 12:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T00:45:18.503Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 16, 2022 12:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T00:45:18.529Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 16, 2022 12:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T00:45:18.857Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 16, 2022 12:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T00:45:18.940Z: Starting 5 workers in us-central1-a...
    Jan 16, 2022 12:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T00:45:30.633Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 16, 2022 12:46:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T00:46:12.749Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 16, 2022 12:46:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T00:46:38.374Z: Workers have started successfully.
    Jan 16, 2022 12:46:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T00:46:38.410Z: Workers have started successfully.
    Jan 16, 2022 12:47:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T00:47:08.984Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 16, 2022 12:47:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T00:47:09.150Z: Cleaning up.
    Jan 16, 2022 12:47:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T00:47:09.238Z: Stopping worker pool...
    Jan 16, 2022 12:49:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T00:49:42.870Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 16, 2022 12:49:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-16T00:49:42.926Z: Worker pool stopped.
    Jan 16, 2022 12:49:50 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-15_16_45_05-1973403425643067297 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 0146d90f-9ef9-4a8f-8967-df192c320ef9 and timestamp: 2022-01-16T00:49:50.049000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.443

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 16, 2022 12:49:50 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
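
    The warning above means the InfluxDB settings were simply not supplied to this run, so the collected metrics stay local. Presumably the fix is two extra entries in the beamTestPipelineOptions array shown at the top of this log, along these lines (option names inferred from the "measurement/database" hint and common Beam load-test configurations; not verified against this job's test utils):

        "--influxMeasurement=sql_bqio_read_java_batch","--influxDatabase=beam_performance"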

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 5 mins 4.263 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 31s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://scans.gradle.com/s/2lggqln6fg5mw

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2917

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2917/display/redirect>

Changes:


------------------------------------------
[...truncated 346.28 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 4a12bc63e680f7cf6b88d08741fcef8e
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 15, 2022 6:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 15, 2022 6:44:52 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 15, 2022 6:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 15, 2022 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 15, 2022 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 15, 2022 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 15, 2022 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 15, 2022 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 15, 2022 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 15, 2022 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@18392277]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 15, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 15, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 15, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 15, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 15, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 15, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 15, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1588148030]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 15, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 15, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 15, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 15, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 15, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 15, 2022 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 15, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 15, 2022 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 15, 2022 6:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 15, 2022 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 15, 2022 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-HoKmZanH_5O4w8vR-XjUi706ly4Ab8VQuPSZeLzefik.jar
    Jan 15, 2022 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7088492201749908801.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-tdPjQUDX00OfgrvlCB3z_S9qMcBPWiB4mhUBizvq7Os.jar
    Jan 15, 2022 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 356 files cached, 1 file newly uploaded in 0 seconds
    Jan 15, 2022 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 15, 2022 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140660 bytes, hash 84a593b8af4c59f4c515cd05837eca40ee0e8f465cdb2f3e351a5b92ae1e3afc> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-hKWTuK9MWfTFFc0Fg37KQO4Oj0Zc2y8-NRpbkq4eOvw.pb
    Jan 15, 2022 6:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 15, 2022 6:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 15, 2022 6:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 15, 2022 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 15, 2022 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-15_10_45_07-7538628477442505826?project=apache-beam-testing
    Jan 15, 2022 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-15_10_45_07-7538628477442505826
    Jan 15, 2022 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-15_10_45_07-7538628477442505826
    Jan 15, 2022 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-15T18:45:11.041Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 15, 2022 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T18:45:19.668Z: Worker configuration: e2-standard-2 in us-central1-b.
    Jan 15, 2022 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T18:45:20.407Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 15, 2022 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T18:45:20.473Z: Expanding GroupByKey operations into optimizable parts.
    Jan 15, 2022 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T18:45:20.500Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 15, 2022 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T18:45:20.617Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 15, 2022 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T18:45:20.643Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 15, 2022 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T18:45:20.680Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 15, 2022 6:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T18:45:21.025Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 15, 2022 6:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T18:45:21.094Z: Starting 5 workers in us-central1-b...
    Jan 15, 2022 6:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T18:45:52.843Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 15, 2022 6:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T18:46:12.765Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 15, 2022 6:46:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T18:46:40.464Z: Workers have started successfully.
    Jan 15, 2022 6:46:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T18:46:40.524Z: Workers have started successfully.
    Jan 15, 2022 6:47:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T18:47:15.007Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 15, 2022 6:47:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T18:47:15.145Z: Cleaning up.
    Jan 15, 2022 6:47:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T18:47:15.221Z: Stopping worker pool...
    Jan 15, 2022 6:49:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T18:49:43.976Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 15, 2022 6:49:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T18:49:44.019Z: Worker pool stopped.
    Jan 15, 2022 6:49:53 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-15_10_45_07-7538628477442505826 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d5e895d3-aecd-4b73-a6f5-57fee666e162 and timestamp: 2022-01-15T18:49:54.001000000Z:
                     Metric:                    Value:
                   read_time                     8.308
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 15, 2022 6:49:54 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 5 mins 5.615 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 33s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://scans.gradle.com/s/dfkz4qfecxr3o

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2916

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2916/display/redirect>

Changes:


------------------------------------------
[...truncated 347.11 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 15, 2022 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 15, 2022 12:44:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 15, 2022 12:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 15, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 15, 2022 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 15, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 15, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 15, 2022 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 15, 2022 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 15, 2022 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@18392277]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 15, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 15, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 15, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 15, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 15, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 15, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 15, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1046044067]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 15, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 15, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 15, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 15, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 15, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 15, 2022 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 15, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 15, 2022 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
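
With DIRECT_READ plus push-down, the projection and filter move into the BigQuery Storage read itself, which is why the plan lists only the four used fields and a supported BigQueryFilter. Outside Beam SQL the same shape can be requested directly on BigQueryIO; a hedged sketch with an illustrative table reference (not confirmed as the IT's actual table name):

    import com.google.api.services.bigquery.model.TableRow;
    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.values.PCollection;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        PCollection<TableRow> rows =
            p.apply(
                BigQueryIO.readTableRows()
                    .from("apache-beam-testing:beam.HACKER_NEWS") // illustrative
                    .withMethod(Method.DIRECT_READ)
                    // Column projection and row filter are handled by the
                    // Storage API, mirroring usedFields and BigQueryFilter:
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    .withRowRestriction(
                        "(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }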
    Jan 15, 2022 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 15, 2022 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 15, 2022 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-HoKmZanH_5O4w8vR-XjUi706ly4Ab8VQuPSZeLzefik.jar
    Jan 15, 2022 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1448064984237509483.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-73pp0Ce070QNjdRB6Evc0crk5bPft4He3omi9ICQDBU.jar
    Jan 15, 2022 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 356 files cached, 1 files newly uploaded in 0 seconds
    Jan 15, 2022 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 15, 2022 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140660 bytes, hash cd92b8aa324b32e0b26e0563bdb9a1f10c7ca37e6e7d5057a014be368449389b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-zZK4qjJLMuCybgVjvbmh8Qx8o35ufVBXoBS-NoRJOJs.pb
    Jan 15, 2022 12:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 15, 2022 12:45:10 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 15, 2022 12:45:10 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 15, 2022 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 15, 2022 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-15_04_45_10-5032831551359017185?project=apache-beam-testing
    Jan 15, 2022 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-15_04_45_10-5032831551359017185
    Jan 15, 2022 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-15_04_45_10-5032831551359017185
    Jan 15, 2022 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-15T12:45:13.508Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 15, 2022 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T12:45:21.899Z: Worker configuration: e2-standard-2 in us-central1-b.
    Jan 15, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T12:45:22.542Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 15, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T12:45:22.580Z: Expanding GroupByKey operations into optimizable parts.
    Jan 15, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T12:45:22.615Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 15, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T12:45:22.676Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 15, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T12:45:22.727Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 15, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T12:45:22.750Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 15, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T12:45:23.086Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 15, 2022 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T12:45:23.196Z: Starting 5 workers in us-central1-b...
    Jan 15, 2022 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T12:45:39.904Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
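
The descriptor-limit warning is benign for this run, since the counters remain queryable under dataflow.googleapis.com/job/user_counter. If the custom.googleapis.com/* metrics were needed, old descriptors would be cleared through the Monitoring v3 API; a hedged example using curl with a gcloud access token (the descriptor name below is made up for illustration):

    # List custom descriptors, then delete an unused one (name is illustrative).
    curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      "https://monitoring.googleapis.com/v3/projects/apache-beam-testing/metricDescriptors?filter=metric.type%20%3D%20starts_with(%22custom.googleapis.com/%22)"
    curl -X DELETE -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      "https://monitoring.googleapis.com/v3/projects/apache-beam-testing/metricDescriptors/custom.googleapis.com%2Fsome-old-metric"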
    Jan 15, 2022 12:46:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T12:46:07.702Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 15, 2022 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T12:46:33.535Z: Workers have started successfully.
    Jan 15, 2022 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T12:46:33.576Z: Workers have started successfully.
    Jan 15, 2022 12:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T12:47:08.141Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 15, 2022 12:47:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T12:47:08.286Z: Cleaning up.
    Jan 15, 2022 12:47:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T12:47:08.367Z: Stopping worker pool...
    Jan 15, 2022 12:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T12:49:31.380Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 15, 2022 12:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T12:49:31.438Z: Worker pool stopped.
    Jan 15, 2022 12:49:39 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-15_04_45_10-5032831551359017185 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 54bca9aa-9995-4e0d-9f32-686785f555ae and timestamp: 2022-01-15T12:49:39.134000000Z:
                     Metric:                    Value:
                   read_time                     8.503
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 15, 2022 12:49:39 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
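
The InfluxDB publisher skips the upload because no measurement/database setting reached the test. In Beam's test utilities these normally arrive as extra entries in beamTestPipelineOptions alongside the BigQuery metrics settings already present above; the flag names and values below are assumptions inferred from the warning, not confirmed against this job's configuration:

    "--influxMeasurement=sql_bqio_read_java_batch","--influxDatabase=beam_test_metrics","--influxHost=http://localhost:8086"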

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 50.527 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings
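
For a rerun that surfaces the individual deprecation warnings, the flag goes on the same task invocation, e.g. from the workspace src root (assuming the repository's Gradle wrapper):

    > ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --warning-mode all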

BUILD FAILED in 5m 18s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://scans.gradle.com/s/edt3vinbecwjo

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2915

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2915/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13664] Fix Primitives hashing benchmark (#16523)


------------------------------------------
[...truncated 346.39 KB...]
Starting process 'Gradle Test Executor 121'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 121'
Successfully started process 'Gradle Test Executor 121'

Gradle Test Executor 121 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 15, 2022 6:44:37 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
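
    The warning refers to the "--workerHarnessContainerImage=" entry visible in
    beamTestPipelineOptions above; the non-deprecated spelling would be the same
    entry renamed, e.g. "--sdkContainerImage=" (value unchanged).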
    Jan 15, 2022 6:44:38 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 15, 2022 6:44:39 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 15, 2022 6:44:41 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 15, 2022 6:44:41 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 15, 2022 6:44:42 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 15, 2022 6:44:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 15, 2022 6:44:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 15, 2022 6:44:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 15, 2022 6:44:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@18392277]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
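
The same construction-time failure as in the previous build. A minimal sketch reproducing the failure mode itself, with an illustrative schema and a pass-through DoFn in place of RowMonitor: leaving the ParDo output without a schema makes getCoder() throw this same IllegalStateException from PCollection.java:286.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class CoderFailureReproSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        Schema schema = Schema.builder().addStringField("type").build();
        PCollection<Row> rows =
            p.apply(
                Create.of(Row.withSchema(schema).addValue("story").build())
                    .withRowSchema(schema));
        PCollection<Row> out =
            rows.apply(
                ParDo.of(
                    new DoFn<Row, Row>() {
                      @ProcessElement
                      public void process(@Element Row r, OutputReceiver<Row> o) {
                        o.output(r);
                      }
                    }));
        // No out.setRowSchema(schema) here, so Row coder inference fails and
        // this call throws the IllegalStateException seen in the log:
        out.getCoder();
      }
    }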

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 15, 2022 6:44:44 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 15, 2022 6:44:44 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 15, 2022 6:44:44 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 15, 2022 6:44:44 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 15, 2022 6:44:44 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 15, 2022 6:44:44 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 15, 2022 6:44:44 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1588148030]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 15, 2022 6:44:44 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 15, 2022 6:44:44 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 15, 2022 6:44:44 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 15, 2022 6:44:44 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 15, 2022 6:44:44 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 15, 2022 6:44:44 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 15, 2022 6:44:45 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 15, 2022 6:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 15, 2022 6:44:45 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 15, 2022 6:44:49 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 15, 2022 6:44:49 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-HoKmZanH_5O4w8vR-XjUi706ly4Ab8VQuPSZeLzefik.jar
    Jan 15, 2022 6:44:49 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7912396709672140055.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ah4R2Vzgx1xmkbNJtJOw_fTr3aOZ-eWZzSQPEBsYs7I.jar
    Jan 15, 2022 6:44:49 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mongodb/mongo-java-driver/3.12.7/1f45c6a397feeb46da75425619333d1cc6f90f78/mongo-java-driver-3.12.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/mongo-java-driver-3.12.7-D_zgBJWNb9mzmuetJ37a0X9XtpcfSGsXYpxe6eE8Tao.jar
    Jan 15, 2022 6:44:49 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 355 files cached, 2 files newly uploaded in 0 seconds
    Jan 15, 2022 6:44:49 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 15, 2022 6:44:50 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140660 bytes, hash 6984b8a1fee29e7a1b9f430ad430e5cb5b952638ba511e2037cda8d0410d831d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-aYS4of7innobn0MK1DDly1uVJji6UR4gN82o0EENgx0.pb
    Jan 15, 2022 6:44:53 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 15, 2022 6:44:53 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 15, 2022 6:44:53 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 15, 2022 6:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 15, 2022 6:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-14_22_44_53-15231231887100415225?project=apache-beam-testing
    Jan 15, 2022 6:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-14_22_44_53-15231231887100415225
    Jan 15, 2022 6:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-14_22_44_53-15231231887100415225
    Jan 15, 2022 6:44:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-15T06:44:56.676Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 15, 2022 6:45:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T06:45:06.297Z: Worker configuration: e2-standard-2 in us-central1-b.
    Jan 15, 2022 6:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T06:45:07.289Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 15, 2022 6:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T06:45:07.329Z: Expanding GroupByKey operations into optimizable parts.
    Jan 15, 2022 6:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T06:45:07.358Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 15, 2022 6:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T06:45:07.431Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 15, 2022 6:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T06:45:07.464Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 15, 2022 6:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T06:45:07.491Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 15, 2022 6:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T06:45:07.956Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 15, 2022 6:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T06:45:08.012Z: Starting 5 workers in us-central1-b...
    Jan 15, 2022 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T06:45:24.642Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 15, 2022 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T06:45:49.188Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 15, 2022 6:46:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T06:46:16.064Z: Workers have started successfully.
    Jan 15, 2022 6:46:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T06:46:16.082Z: Workers have started successfully.
    Jan 15, 2022 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T06:46:48.397Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 15, 2022 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T06:46:48.530Z: Cleaning up.
    Jan 15, 2022 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T06:46:48.615Z: Stopping worker pool...
    Jan 15, 2022 6:49:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T06:49:15.755Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 15, 2022 6:49:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T06:49:15.820Z: Worker pool stopped.
    Jan 15, 2022 6:49:21 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-14_22_44_53-15231231887100415225 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): a98bfeb9-4065-4714-bac5-a15eb20daf0a and timestamp: 2022-01-15T06:49:21.856000000Z:
                     Metric:                    Value:
                   read_time                     7.599
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 15, 2022 6:49:21 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 121 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.001 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.002 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 49.944 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 3s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://scans.gradle.com/s/xxv2wwevbi2no

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2914

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2914/display/redirect?page=changes>

Changes:

[daria.malkova] Split builder into several builder for each step of pipeline execution

[Valentyn Tymofieiev] Provide API to check whether a hint is known.

[Valentyn Tymofieiev] [BEAM-12558] Fix doc typo.

[jrmccluskey] [BEAM-10206] Resolve go vet errors in protox package

[noreply] Merge pull request #16482 from [BEAM-13429][Playground] Add builder for

[noreply] [BEAM-13590] Fix  abc imports from collections (#15850)

[jrmccluskey] Fix staticcheck errors in transforms directory

[jrmccluskey] Remove unnecessary fmt.Sprintf() in partition.go

[jrmccluskey] Replace bytes.Compare() with bytes.Equal() in test cases

[jrmccluskey] Replace string(buf.Bytes()) with buf.String() in coder_test.go

[jrmccluskey] Remove unnecessary blank identifier assignment in harness.go

[jrmccluskey] fix capitalized error strings in expansionx

[jrmccluskey] Clean up string cast of bytes in vet.go and corresponding tests

[jrmccluskey] Remove unnecessary fmt call in universal.go

[Robert Bradshaw] Remove tab from source.

[noreply] Redirecting cross-language transforms content (#16504)

[noreply] doc tweaks (#16498)

[noreply] [BEAM-12621] - Update Jenkins VMs to modern Ubuntu version (#16457)


------------------------------------------
[...truncated 346.41 KB...]
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 15, 2022 12:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 15, 2022 12:44:54 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 15, 2022 12:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 15, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 15, 2022 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 15, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 15, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 15, 2022 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 15, 2022 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 15, 2022 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@18392277]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 15, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 15, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 15, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 15, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 15, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 15, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 15, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1046044067]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 15, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 15, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 15, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 15, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 15, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 15, 2022 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 15, 2022 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 15, 2022 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 15, 2022 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 15, 2022 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 15, 2022 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-HoKmZanH_5O4w8vR-XjUi706ly4Ab8VQuPSZeLzefik.jar
    Jan 15, 2022 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7400399100655922583.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-aVOWOCRaTTEp6Mm9-zXYwpY9HwaJsd1QpUyKLmVWmM0.jar
    Jan 15, 2022 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/google-cloud-bigtable-emulator/0.137.1/b1639aa134de1302e43d9e9c3843f6ff853c510f/google-cloud-bigtable-emulator-0.137.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-cloud-bigtable-emulator-0.137.1-V2SlLU4BjIGf5QoF7YL-A-QNjw49NdXrNM4QoX9MXSI.jar
    Jan 15, 2022 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 355 files cached, 2 files newly uploaded in 2 seconds
    Jan 15, 2022 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 15, 2022 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140660 bytes, hash df901c65c604c7275d158b27f05620df52c627bc2ccb75640e87bf51e39eceee> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-35AcZcYExyddFYsn8FYg31LGJ7wsy3VkDoe_UeOezu4.pb
    Jan 15, 2022 12:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 15, 2022 12:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 15, 2022 12:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 15, 2022 12:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 15, 2022 12:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-14_16_45_13-13595953685789249546?project=apache-beam-testing
    Jan 15, 2022 12:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-14_16_45_13-13595953685789249546
    Jan 15, 2022 12:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-14_16_45_13-13595953685789249546
    Jan 15, 2022 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-15T00:45:16.778Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 15, 2022 12:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T00:45:25.267Z: Worker configuration: e2-standard-2 in us-central1-b.
    Jan 15, 2022 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T00:45:25.983Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 15, 2022 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T00:45:26.029Z: Expanding GroupByKey operations into optimizable parts.
    Jan 15, 2022 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T00:45:26.077Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 15, 2022 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T00:45:26.173Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 15, 2022 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T00:45:26.212Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 15, 2022 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T00:45:26.249Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 15, 2022 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T00:45:26.681Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 15, 2022 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T00:45:26.771Z: Starting 5 workers in us-central1-b...
    Jan 15, 2022 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T00:45:48.945Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 15, 2022 12:46:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T00:46:15.071Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 15, 2022 12:46:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T00:46:40.800Z: Workers have started successfully.
    Jan 15, 2022 12:46:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T00:46:40.847Z: Workers have started successfully.
    Jan 15, 2022 12:47:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T00:47:14.262Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 15, 2022 12:47:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T00:47:14.438Z: Cleaning up.
    Jan 15, 2022 12:47:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T00:47:14.543Z: Stopping worker pool...
    Jan 15, 2022 12:49:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T00:49:36.941Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 15, 2022 12:49:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-15T00:49:37.023Z: Worker pool stopped.
    Jan 15, 2022 12:49:42 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-14_16_45_13-13595953685789249546 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): da13bfb4-51fb-4d01-ba5b-104374604715 and timestamp: 2022-01-15T00:49:43.033000000Z:
                     Metric:                    Value:
                   read_time                     8.721
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 15, 2022 12:49:43 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
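
The InfluxDB warning above means the run collected its metrics but had nowhere to publish them: the publisher needs a measurement and a database. A minimal sketch of how these could be supplied alongside the other pipeline options, assuming the --influxMeasurement, --influxDatabase and --influxHost option names used by Beam's test utilities (the values here are illustrative, not this job's configuration):

    -DbeamTestPipelineOptions=["...","--influxMeasurement=sql_bqio_read_java_batch","--influxDatabase=beam_test_metrics","--influxHost=http://localhost:8086"]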

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 55.088 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings
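
To pin down which build script or plugin triggers those deprecation warnings, the failing task can be re-run with the flag the message mentions, e.g.:

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --warning-mode all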

BUILD FAILED in 5m 22s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
Publishing build scan failed due to network error 'java.net.SocketTimeoutException: Read timed out' (2 retries remaining)...
https://scans.gradle.com/s/hzcewuibb4eb2

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2913

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2913/display/redirect?page=changes>

Changes:

[arietis27] [BEAM-13400] JDBC IO does not support UUID and JSONB PostgreSQL types


------------------------------------------
[...truncated 354.66 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 4a12bc63e680f7cf6b88d08741fcef8e
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 36'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 36'
Successfully started process 'Gradle Test Executor 36'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 14, 2022 6:45:50 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 14, 2022 6:45:51 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 14, 2022 6:45:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 14, 2022 6:45:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 14, 2022 6:45:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 14, 2022 6:45:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 14, 2022 6:45:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 14, 2022 6:45:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 14, 2022 6:45:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 14, 2022 6:45:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1943548225]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
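
This IllegalStateException is the root cause of both test failures: the PCollection of Beam Rows emitted by ParDo(RowMonitor) ends up with no coder, and Row coders can only be derived from a schema. A minimal sketch of the fix the message itself suggests -- attaching a schema via PCollection.setRowSchema -- where the DoFn and the field list are hypothetical stand-ins mirroring the query above, not the test's actual code:

    // Sketch only: RowMonitorFn and the schema are illustrative stand-ins.
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowCoderFixSketch {
      // Mirrors the projected columns of the query above; field types are assumed.
      static final Schema OUTPUT_SCHEMA =
          Schema.builder()
              .addNullableField("author", Schema.FieldType.STRING)
              .addNullableField("type", Schema.FieldType.STRING)
              .addNullableField("title", Schema.FieldType.STRING)
              .addNullableField("score", Schema.FieldType.INT64)
              .build();

      static PCollection<Row> monitored(PCollection<Row> input) {
        return input
            .apply("ParDo(RowMonitor)", ParDo.of(new RowMonitorFn()))
            // Attaching the schema gives the output PCollection a RowCoder,
            // which is exactly what the error message asks for.
            .setRowSchema(OUTPUT_SCHEMA);
      }

      // Hypothetical pass-through DoFn standing in for the test's RowMonitor.
      static class RowMonitorFn extends DoFn<Row, Row> {
        @ProcessElement
        public void process(@Element Row row, OutputReceiver<Row> out) {
          out.output(row);
        }
      }
    }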

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 14, 2022 6:45:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 14, 2022 6:45:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 14, 2022 6:45:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 14, 2022 6:45:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 14, 2022 6:45:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 14, 2022 6:45:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 14, 2022 6:45:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2092841477]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 14, 2022 6:45:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 14, 2022 6:45:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 14, 2022 6:45:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 14, 2022 6:45:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 14, 2022 6:45:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 14, 2022 6:45:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 14, 2022 6:45:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 14, 2022 6:45:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
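
Unlike the BeamCalcRel plans in the two failing tests, the BEAMPlan above pushes both the projection (usedFields) and the filter into the source, so BigQuery returns only the four referenced columns and the matching rows -- which the "Pushing down the following filter" line confirms. For reference, a sketch of how such a table is typically declared for Beam SQL with the Storage Read API, following the documented TBLPROPERTIES syntax (the column list and location are illustrative, not this test's exact DDL):

    -- Sketch: schema and location are illustrative.
    CREATE EXTERNAL TABLE HACKER_NEWS (
      title VARCHAR,
      score BIGINT,
      `by` VARCHAR,
      `type` VARCHAR
    )
    TYPE bigquery
    LOCATION 'apache-beam-testing:beam.HACKER_NEWS'
    TBLPROPERTIES '{"method": "DIRECT_READ"}'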
    Jan 14, 2022 6:45:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 14, 2022 6:46:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 14, 2022 6:46:03 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-HoKmZanH_5O4w8vR-XjUi706ly4Ab8VQuPSZeLzefik.jar
    Jan 14, 2022 6:46:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2564297979298479521.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-1i3KcnduUu4SL8GhNhg3XmP5t_7fpyPJBrzZlz5hmXI.jar
    Jan 14, 2022 6:46:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 356 files cached, 1 files newly uploaded in 0 seconds
    Jan 14, 2022 6:46:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 14, 2022 6:46:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140658 bytes, hash 9d4b4e11fc2990f6faf20cc99f2d9446788c7ec33bd3ce92eb04ce1e4300351b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-nUtOEfwpkPb68gzJny2URniMfsM7086S6wTOHkMANRs.pb
    Jan 14, 2022 6:46:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 14, 2022 6:46:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 14, 2022 6:46:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 14, 2022 6:46:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 14, 2022 6:46:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-14_10_46_07-13738244675331493232?project=apache-beam-testing
    Jan 14, 2022 6:46:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-14_10_46_07-13738244675331493232
    Jan 14, 2022 6:46:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-14_10_46_07-13738244675331493232
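
    The job can likewise be inspected while it runs; a sketch using the matching describe command (same project, region and job id as above):
    > gcloud dataflow jobs describe 2022-01-14_10_46_07-13738244675331493232 --project=apache-beam-testing --region=us-central1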
    Jan 14, 2022 6:46:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-14T18:46:11.138Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 14, 2022 6:46:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T18:46:19.810Z: Worker configuration: e2-standard-2 in us-central1-f.
    Jan 14, 2022 6:46:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T18:46:20.465Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 14, 2022 6:46:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T18:46:20.506Z: Expanding GroupByKey operations into optimizable parts.
    Jan 14, 2022 6:46:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T18:46:20.522Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 14, 2022 6:46:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T18:46:20.594Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 14, 2022 6:46:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T18:46:20.631Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 14, 2022 6:46:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T18:46:20.680Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 14, 2022 6:46:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T18:46:21.062Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 14, 2022 6:46:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T18:46:21.128Z: Starting 5 workers in us-central1-f...
    Jan 14, 2022 6:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T18:46:35.159Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 14, 2022 6:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T18:47:06.527Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 14, 2022 6:47:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T18:47:31.544Z: Workers have started successfully.
    Jan 14, 2022 6:47:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T18:47:31.569Z: Workers have started successfully.
    Jan 14, 2022 6:48:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T18:48:05.700Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 14, 2022 6:48:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T18:48:06.077Z: Cleaning up.
    Jan 14, 2022 6:48:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T18:48:06.377Z: Stopping worker pool...
    Jan 14, 2022 6:50:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T18:50:32.965Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 14, 2022 6:50:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T18:50:33.021Z: Worker pool stopped.
    Jan 14, 2022 6:50:40 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-14_10_46_07-13738244675331493232 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 1231f98d-2b47-4b8d-9230-3a205ac363b7 and timestamp: 2022-01-14T18:50:40.090000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.154

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 14, 2022 6:50:40 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 36 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.008 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.004 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 54.161 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 19s
165 actionable tasks: 107 executed, 56 from cache, 2 up-to-date

Publishing build scan...
https://scans.gradle.com/s/bgs4zhkngucbk

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2912

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2912/display/redirect?page=changes>

Changes:

[heejong] [BEAM-13455] Remove duplicated artifacts when using multiple

[noreply] [BEAM-12572] Run java examples on multiple runners (#16450)


------------------------------------------
[...truncated 352.13 KB...]
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 14, 2022 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 14, 2022 12:45:07 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 14, 2022 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 14, 2022 12:45:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 14, 2022 12:45:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 14, 2022 12:45:11 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 14, 2022 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 14, 2022 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 14, 2022 12:45:11 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 14, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@18392277]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 14, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 14, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 14, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 14, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 14, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 14, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 14, 2022 12:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1046044067]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 14, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 14, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 14, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 14, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 14, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 14, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 14, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 14, 2022 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 14, 2022 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 14, 2022 12:45:17 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 14, 2022 12:45:17 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-HoKmZanH_5O4w8vR-XjUi706ly4Ab8VQuPSZeLzefik.jar
    Jan 14, 2022 12:45:17 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7053486896940050074.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-rQz17VViB5RajT7mm0Arcv9W5u-myD_zMBmtAs_ZPME.jar
    Jan 14, 2022 12:45:19 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 356 files cached, 1 files newly uploaded in 1 seconds
    Jan 14, 2022 12:45:19 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 14, 2022 12:45:19 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140660 bytes, hash 9035f358e3cb8c3416984cfadb314126e60b3c6cb54b8db3bf974b9e4c421837> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-kDXzWOPLjDQWmEz62zFBJuYLPGy1S42zv5dLnkxCGDc.pb
    Jan 14, 2022 12:45:22 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 14, 2022 12:45:22 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 14, 2022 12:45:22 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 14, 2022 12:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 14, 2022 12:45:23 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-14_04_45_22-9797264863595373969?project=apache-beam-testing
    Jan 14, 2022 12:45:23 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-14_04_45_22-9797264863595373969
    Jan 14, 2022 12:45:23 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-14_04_45_22-9797264863595373969
    Jan 14, 2022 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-14T12:45:26.048Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 14, 2022 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T12:45:33.097Z: Worker configuration: e2-standard-2 in us-central1-f.
    Jan 14, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T12:45:33.680Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 14, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T12:45:33.714Z: Expanding GroupByKey operations into optimizable parts.
    Jan 14, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T12:45:33.762Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 14, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T12:45:33.831Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 14, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T12:45:33.853Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 14, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T12:45:33.879Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 14, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T12:45:34.200Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 14, 2022 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T12:45:34.270Z: Starting 5 workers in us-central1-f...
    Jan 14, 2022 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T12:45:40.853Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 14, 2022 12:46:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T12:46:32.968Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 14, 2022 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T12:46:58.850Z: Workers have started successfully.
    Jan 14, 2022 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T12:46:58.879Z: Workers have started successfully.
    Jan 14, 2022 12:47:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T12:47:29.875Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 14, 2022 12:47:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T12:47:30.005Z: Cleaning up.
    Jan 14, 2022 12:47:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T12:47:30.081Z: Stopping worker pool...
    Jan 14, 2022 12:49:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T12:49:57.589Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 14, 2022 12:49:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T12:49:57.628Z: Worker pool stopped.
    Jan 14, 2022 12:50:04 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-14_04_45_22-9797264863595373969 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 1d3544b9-631a-4319-a425-ecc583345942 and timestamp: 2022-01-14T12:50:04.224000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.526

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 14, 2022 12:50:04 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 5 mins 1.418 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 43s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...

Publishing failed.

The build scan server appears to be unavailable.
Please check https://status.gradle.com for the latest service status.

If the service is reported as available, please report this problem via https://gradle.com/help/plugin and include the following via copy/paste:

----------
Gradle version: 7.3.2
Plugin version: 3.4.1
Request URL: https://status.gradle.com
Request ID: 0501a46f-8ecb-4961-85b0-db017553857c
Response status code: 405
Response server type: Varnish
----------

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2911

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2911/display/redirect?page=changes>

Changes:

[blais] python sdk examples: Fixed typo in wordcount example.


------------------------------------------
[...truncated 377.45 KB...]
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-transport-zerodep/3.2.7/d74cb05fabc57e9ec441dc39d0bc58ad649fad3d/docker-java-transport-zerodep-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-transport-zerodep-3.2.7-O2a9rUb-899yDZMu4E8j9ItFOkNoggqbLP2xaLmOwJY.jar
    Jan 14, 2022 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.directory.server/apacheds-i18n/2.0.0-M15/71c61c84683152ec2a6a65f3f96fe534e304fa22/apacheds-i18n-2.0.0-M15.jar to gs://temp-storage-for-perf-tests/loadtests/staging/apacheds-i18n-2.0.0-M15-vTt87Of8Y2TLzjK57dDpYoo-iJxqk83v8bXiEx4qAHw.jar
    Jan 14, 2022 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.codahale.metrics/metrics-core/3.0.1/1e98427c7f6e53363b598e2943e50903ce4f3657/metrics-core-3.0.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/metrics-core-3.0.1-sORikCIn9yYdDRpjtquyitz3EUQFjiQ8wJJ9NgofrqE.jar
    Jan 14, 2022 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.directory.api/api-util/1.0.0-M20/a871abf060b3cf83fc6dc4d7e3d151fce50ac3cb/api-util-1.0.0-M20.jar to gs://temp-storage-for-perf-tests/loadtests/staging/api-util-1.0.0-M20-_TL9BHzPFDxY0JO1iBGqgeU5-M-DwRh4CfGiQaHfEtE.jar
    Jan 14, 2022 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.zaxxer/HikariCP-java7/2.4.12/c5f7070c3ce390bd0b4c842fc75f4b698152dee2/HikariCP-java7-2.4.12.jar to gs://temp-storage-for-perf-tests/loadtests/staging/HikariCP-java7-2.4.12-uxHtPh-bbormGnHE8ssIucJ3h794-xqf8zUhdYeFbrc.jar
    Jan 14, 2022 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.directory.api/api-asn1-api/1.0.0-M20/5e6486ffa3125ba44dc410ead166e1d6ba8ac76d/api-asn1-api-1.0.0-M20.jar to gs://temp-storage-for-perf-tests/loadtests/staging/api-asn1-api-1.0.0-M20-SEqvS4iLDraZ2VvqJlwtW26-yVHXDlxfdpHNUt1Mgpg.jar
    Jan 14, 2022 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-csv/1.8/37ca9a9aa2d4be2599e55506a6d3170dd7a3df4/commons-csv-1.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-csv-1.8-qL1WZS7UZo2dWjOZSuUvWbnjnI6w68tmhOaK7udXmmE.jar
    Jan 14, 2022 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.janino/commons-compiler/3.0.11/f2a6ec7fbc929c9fc87ff8bb486c0574951c5b09/commons-compiler-3.0.11.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-compiler-3.0.11-DxpPXyZccBoxkzJErnBF_O8YtPpZUEF-Je5wvlDd2s8.jar
    Jan 14, 2022 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.janino/janino/3.0.11/e699e368095379ba0402ea1780a87fcaea16e68f/janino-3.0.11.jar to gs://temp-storage-for-perf-tests/loadtests/staging/janino-3.0.11-kje3HSMpGA5ZIQ6aqhAO4xNFTvCuWIYIx1yxkxlZG-E.jar
    Jan 14, 2022 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.spotbugs/spotbugs-annotations/3.1.9/2ef5127efcc1a899aab8c66d449a631c9a99c469/spotbugs-annotations-3.1.9.jar to gs://temp-storage-for-perf-tests/loadtests/staging/spotbugs-annotations-3.1.9-aMfEa0KZ6Ug34jaudC85mQGpUP6RD-PKcQAmdTtd0uE.jar
    Jan 14, 2022 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.alibaba/fastjson/1.2.69/6cb063f1d527ff65bdbb9ea74888a5ffc3f92197/fastjson-1.2.69.jar to gs://temp-storage-for-perf-tests/loadtests/staging/fastjson-1.2.69-KniRdEoDeOAJmpuB3b7U0uGn6GMvb4b4_8-jg-UeScg.jar
    Jan 14, 2022 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.jackson/jackson-xc/1.9.13/e3480072bc95c202476ffa1de99ff7ee9149f29c/jackson-xc-1.9.13.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jackson-xc-1.9.13-LSkF_Ox9HFW3dZlWF2hduwNnI1BwTZ5AtJLqtbVNC-c.jar
    Jan 14, 2022 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.jackson/jackson-jaxrs/1.9.13/534d72d2b9d6199dd531dfb27083dd4844082bba/jackson-jaxrs-1.9.13.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jackson-jaxrs-1.9.13-F3BXCmulyHpHlcCutA7nxf5eMd9k7x1HlaDUJ3lrhLs.jar
    Jan 14, 2022 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.sun.jersey/jersey-json/1.9/1aa73e1896bcc7013fed247157d7f676226eb432/jersey-json-1.9.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jersey-json-1.9-zF1TX0PO8NHEZyQJYarjWAGoN6sBAxnnQbLHpmWPP9Y.jar
    Jan 14, 2022 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/commons-configuration/commons-configuration/1.6/32cadde23955d7681b0d94a2715846d20b425235/commons-configuration-1.6.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-configuration-1.6-RrcbllYVT2oW6ksdyEAmtSqTBfjv8EaitGVfoXOOXu4.jar
    Jan 14, 2022 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/commons-digester/commons-digester/1.8/dc6a73fdbd1fa3f0944e8497c6c872fa21dca37e/commons-digester-1.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-digester-1.8-BWYjcwRPPf8RJWe3u136EXTpHgdMDHJ7RBJ4gBP0nVY.jar
    Jan 14, 2022 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/commons-beanutils/commons-beanutils/1.9.4/d52b9abcd97f38c81342bb7e7ae1eee9b73cba51/commons-beanutils-1.9.4.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-beanutils-1.9.4-fZOMgXiQKARcCMBl6UvnX8KAUnYg1b1itRnVg4UyNoo.jar
    Jan 14, 2022 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs-client/2.10.1/b96dfefae8ddffe72b051a639cb11f4fe40f06ae/hadoop-hdfs-client-2.10.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/hadoop-hdfs-client-2.10.1-PY6m6joD2T6vr-wkECaRkhrDk3J5PT6sa_ybU6-bjkQ.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.rnorth.visible-assertions/visible-assertions/2.1.2/20d31a578030ec8e941888537267d3123c2ad1c1/visible-assertions-2.1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/visible-assertions-2.1.2-RQSulosjfNzcto_1sHqmOr5JkvkHp3w9YSCqm5BBQBw.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.rnorth.duct-tape/duct-tape/1.0.8/92edc22a9ab2f3e17c9bf700aaee377d50e8b530/duct-tape-1.0.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/duct-tape-1.0.8-Mc7xLd7JedH4bXz3CMQaF9pSPQXGhf1mQunQsq3bckA.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-annotations/2.10.1/573603581e99e21206d251a3392ef750e9bc18ab/hadoop-annotations-2.10.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/hadoop-annotations-2.10.1-OFIe3ZPg29De8bZXcD3x9HhwgIudOezUAtSqPFrPFkI.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/commons-cli/commons-cli/1.2/2bf96b7aa8b611c177d329452af1dc933e14501c/commons-cli-1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-cli-1.2-582JUZVtNJtWi3zP1PWyUpqMET5nwysCj1L_2jcSWdk.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/xmlenc/xmlenc/0.52/d82554efbe65906d83b3d97bd7509289e9db561a/xmlenc-0.52.jar to gs://temp-storage-for-perf-tests/loadtests/staging/xmlenc-0.52-KCrhhfwv8n2ncUr5liiXwJz--vuIByIZxKL5xzYWwCY.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mortbay.jetty/jetty-sslengine/6.1.26/60367999cee49a3b09fa86bdcb52310b6c896014/jetty-sslengine-6.1.26.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jetty-sslengine-6.1.26-nF9rsWi6AbldJQtX8GHICU4c6cia5OdzSSussXGS6oc.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/commons-collections/commons-collections/3.2.2/8ad72fe39fa8c91eaaf12aadb21e0c3661fe26d5/commons-collections-3.2.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-collections-3.2.2-7urpF5FxRKaKdB1MDf9mqlxcX9hVk_8he87T_Iyng7g.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/javax.servlet.jsp/jsp-api/2.1/63f943103f250ef1f3a4d5e94d145a0f961f5316/jsp-api-2.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jsp-api-2.1-VF9OfcZ4_7TPi9D9QLSkRwpAmnh8DqfQrS8I1WESmHs.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/log4j/log4j/1.2.17/5af35056b4d257e4b64b9e8069c0746e8b08629f/log4j-1.2.17.jar to gs://temp-storage-for-perf-tests/loadtests/staging/log4j-1.2.17-HTFpZEVpdyBScJF1Q2kIKmZRvUl4G2AF3rlOVnU0Bvk.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.woodstox/stax2-api/4.2.1/a3f7325c52240418c2ba257b103c3c550e140c83/stax2-api-4.2.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/stax2-api-4.2.1-Z4Vn5ItRpCxlxpnyZlOa09Z21LGlsK19iezoudV3JXk.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.fasterxml.woodstox/woodstox-core/5.0.3/10aa199207fda142eff01cd61c69244877d71770/woodstox-core-5.0.3.jar to gs://temp-storage-for-perf-tests/loadtests/staging/woodstox-core-5.0.3-ocBLZPv-IK6fLGCjvxYz_tZoiuMZNba9SkV6G7sugtQ.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.htrace/htrace-core4/4.1.0-incubating/12b3e2adda95e8c41d9d45d33db075137871d2e2/htrace-core4-4.1.0-incubating.jar to gs://temp-storage-for-perf-tests/loadtests/staging/htrace-core4-4.1.0-incubating-XUW3kEhXw-StNrO8xXvi0sXzCMabX2pYvYaqfUiiXvY.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.yetus/audience-annotations/0.12.0/e0efa60318229590103e31c69ebdaae56d903644/audience-annotations-0.12.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/audience-annotations-0.12.0-_7EB_AZjYP88d0V8kn_Xln-wlqbungRqtwcUR9ggjvw.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/javax.xml.bind/jaxb-api/2.2.2/aeb3021ca93dde265796d82015beecdcff95bf09/jaxb-api-2.2.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jaxb-api-2.2.2-MCM99iFfuYLYeE3pHTB1lnSM6pjW1QIpPHw-hcFpcTc.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.squareup.okhttp/okhttp/2.7.5/7a15a7db50f86c4b64aa3367424a60e3a325b8f1/okhttp-2.7.5.jar to gs://temp-storage-for-perf-tests/loadtests/staging/okhttp-2.7.5-iKyf0btR-CvMZkzB65wiXJDcQ4nWYCMbTMc3vr_n0Ko.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.parquet/parquet-jackson/1.12.0/cf19a7197bda364334325a8a1cde5bcf5889773d/parquet-jackson-1.12.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/parquet-jackson-1.12.0-5L8vHgq0ebpIlxhdwDV_0qW9_zAN2aUQ74t7ayZ0tzg.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/commons-net/commons-net/3.1/2298164a7c2484406f2aa5ac85b205d39019896f/commons-net-3.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-net-3.1-NKWNbYClB0gwfmdOwntEEeZTb9EueL7EKOsu5JoSMAc.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.java.dev.jna/jna/5.5.0/e0845217c4907822403912ad6828d8e0b256208/jna-5.5.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jna-5.5.0-swj66_5O1AnehBDgpjLRZLISawNfbqz_lo05CMr7TZ4.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.java.dev.jna/jna-platform/4.0.0/deb6bf66918989b50209b8c9aaf3b2561af7f011/jna-platform-4.0.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jna-platform-4.0.0-B21i7Yfna9yzdQ_-gKpJXuIRK9chJmOVTWVH9IJEkuk.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.minidev/json-smart/2.3/7396407491352ce4fa30de92efb158adb76b5b/json-smart-2.3.jar to gs://temp-storage-for-perf-tests/loadtests/staging/json-smart-2.3-kD9IyKpMP2QmRAuNMt6J-h3COxFpq94l5OHQaKpncIs.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.nimbusds/nimbus-jose-jwt/7.9/b608cd5e306d67bb58fe5bd687387aa0671687a6/nimbus-jose-jwt-7.9.jar to gs://temp-storage-for-perf-tests/loadtests/staging/nimbus-jose-jwt-7.9-tPWEU-GAqYHrdEoZtNVq-xLxDD3TXnUxzFjubL9b8oY.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/commons-pool/commons-pool/1.6/4572d589699f09d866a226a14b7f4323c6d8f040/commons-pool-1.6.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-pool-1.6-RsQrSjjcay21Op7lySxj2xA2ZdVmlOLPziyV1RpoYMw.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.minidev/accessors-smart/1.2/c592b500269bfde36096641b01238a8350f8aa31/accessors-smart-1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/accessors-smart-1.2-DHwmXWL8AHEk3DK5EzbpxCcmUdYpvF-hpOTjvHWOsuQ.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.ow2.asm/asm/5.0.4/da08b8cce7bbf903602a25a3a163ae252435795/asm-5.0.4.jar to gs://temp-storage-for-perf-tests/loadtests/staging/asm-5.0.4-iWYY7YrmJwJSGni8e-QrfEkaCOaSChX4mj7N7DHpoiA.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-transport/3.2.7/315903a129f530422747efc163dd255f0fa2555e/docker-java-transport-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-transport-3.2.7-ynNXlSXhDSyiLCXkA4c9E9_HGeKB7-wbpEzmswFrI-A.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/jline/jline/0.9.94/99a18e9a44834afdebc467294e1138364c207402/jline-0.9.94.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jline-0.9.94-2N8P-xLYfKh2JxzaTVmz_rlBI4gsG-F2O3-vLgoLDLs.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty/3.10.6.Final/18ed04a0e502896552854926e908509db2987a00/netty-3.10.6.Final.jar to gs://temp-storage-for-perf-tests/loadtests/staging/netty-3.10.6.Final-h2ilD749k6iNjmAA6l1o4w9Q3JFbN2TDxYcPcMT7O0k.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.squareup.okio/okio/1.6.0/98476622f10715998eacf9240d6b479f12c66143/okio-1.6.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/okio-1.6.0-EUvcH0czimi8vJWr8vXNxyvu7JGBLy_Ne1IcGTeHYmY.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.fusesource.leveldbjni/leveldbjni-all/1.8/707350a2eeb1fa2ed77a32ddb3893ed308e941db/leveldbjni-all-1.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/leveldbjni-all-1.8-wpchOw5vk5IwWVJ1PzCZpMAucLNlYmb-AYZ-e2wWD_4.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/javax.activation/activation/1.1/e6cb541461c2834bdea3eb920f1884d1eb508b50/activation-1.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/activation-1.1-KIHHnJ1u8BxY5ivuoT6dGsi4uqFvL8GYrW5ndt79zdM.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/javax.xml.stream/stax-api/1.0-2/d6337b0de8b25e53e81b922352fbea9f9f57ba0b/stax-api-1.0-2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/stax-api-1.0-2-6McOvXb5gslYKoLvgs9s4Up9WKSk3KXLe3_JiMgAibc.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mortbay.jetty/jetty/6.1.26/2f546e289fddd5b1fab1d4199fbb6e9ef43ee4b0/jetty-6.1.26.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jetty-6.1.26-IQkdOpwTSfZA_cQhUEpgTAQO2JCH7MEq--MjUzJu1OU.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mortbay.jetty/jetty-util/6.1.26/e5642fe0399814e1687d55a3862aa5a3417226a9/jetty-util-6.1.26.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jetty-util-6.1.26-m5dM4rmfSCVLdhJjN9xFshIm84Oq7WFvWXgK2vFnwEc.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.sun.jersey/jersey-client/1.9/d3c4b2b5f89db32c96ceddcb863684821910a7bb/jersey-client-1.9.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jersey-client-1.9-iuA68NBsRqUbZdEj7EDyRdppCZGqNmnO9HZ9uPNvvmg.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.sun.jersey/jersey-core/1.9/8341846f18187013bb9e27e46b7ee00a6395daf4/jersey-core-1.9.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jersey-core-1.9-LG0OyI_Iw2y0FjfZwA0GmMIstrahN_pSbveC4A0iZbw.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.sun.jersey.contribs/jersey-guice/1.9/5963c28c47df7e5d6ad34cec80c071c368777f7b/jersey-guice-1.9.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jersey-guice-1.9-VE_JLSYlMyqajuqnpydM8a9nA5NqUK-oDZKnggCn3jQ.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.sun.jersey/jersey-server/1.9/3a6ea7cc5e15c824953f9f3ece2201b634d90d18/jersey-server-1.9.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jersey-server-1.9-Pe2RsZgHdWG9UfbARCyc1wt1TYsxthr69Ei9qdAYSPA.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.inject/guice/3.0/9d84f15fe35e2c716a02979fb62f50a29f38aefa/guice-3.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/guice-3.0-GlnQQh_9NVzAtwtC3xwumvdEyKLQyS2jefX8ovB_HSI.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/javax.servlet/servlet-api/2.5/5959582d97d8b61f4d154ca9e495aafd16726e34/servlet-api-2.5.jar to gs://temp-storage-for-perf-tests/loadtests/staging/servlet-api-2.5-xljqNgpw-u6ttm-zyQpwLkFCoKt3aPmumChnjg2a1Nw.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.geronimo.specs/geronimo-jcache_1.0_spec/1.0-alpha-1/ef92fbbc3a3a7f45bf021bcb75df2c6e0660dfac/geronimo-jcache_1.0_spec-1.0-alpha-1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/geronimo-jcache_1.0_spec-1.0-alpha-1-AHChLlj0kblXGTkTJSmaYpRTDubDziXlC9yYsLcAlmw.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.stephenc.jcip/jcip-annotations/1.0-1/ef31541dd28ae2cefdd17c7ebf352d93e9058c63/jcip-annotations-1.0-1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jcip-annotations-1.0-1-T8z_g4Kq_FiZYsTtsmL2qlleNPHhHmEFfRxqluj8cyM.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/javax.inject/javax.inject/1/6975da39a7040257bd51d21a231b76c915872d38/javax.inject-1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/javax.inject-1-kcdwRKUMSBY2wy2Rb9ickRinIZU5BFLIEGUID5V95_8.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.jettison/jettison/1.1/1a01a2a1218fcf9faa2cc2a6ced025bdea687262/jettison-1.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jettison-1.1-N3lAKIsGQ8SHgBN_b2hXiTfh6lyitzgwqCDFCnt-2AE.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.microsoft.sqlserver/mssql-jdbc/6.2.1.jre7/2912ca3a5ee674ec79cd6914b9f5d6282d083eb8/mssql-jdbc-6.2.1.jre7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/mssql-jdbc-6.2.1.jre7-nPollFCuNHHS5uLD2K78ziNuPa74s3NNIdyTw6W76AY.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/aopalliance/aopalliance/1.0/235ba8b489512805ac13a8f9ea77a1ca5ebe3e8/aopalliance-1.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/aopalliance-1.0-Ct3sZw_tzT8RPFyAkdeDKA0j9146y4QbYanNsHk3agg.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/asm/asm/3.1/c157def142714c544bdea2e6144645702adf7097/asm-3.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/asm-3.1-Mz_1NpBDl1t-AxuLJyBpN0QYVHOOA4wfR_mNByogQ3o.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.sun.xml.bind/jaxb-impl/2.3.3/3758e8c1664979749e647a9ca8c7ea1cd83c9b1e/jaxb-impl-2.3.3.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jaxb-impl-2.3.3-5ReNDHlIJH91oTxom_NvTV1JEKEh9xKqOyCulDdwadg.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.sonatype.sisu.inject/cglib/2.2.1-v20090111/7ce5e983fd0e6c78346f4c9cbfa39d83049dda2/cglib-2.2.1-v20090111.jar to gs://temp-storage-for-perf-tests/loadtests/staging/cglib-2.2.1-v20090111-QuHfsmvsvxpjPyW0fjn8xCK4XnfkwEaNmkT4hfX6C-I.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.jcraft/jsch/0.1.55/bbd40e5aa7aa3cfad5db34965456cee738a42a50/jsch-0.1.55.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jsch-0.1.55-1JKxWm0uo_HMOcQiyVPEDBIokHPb6DYNmMD2-ex0_EQ.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.jamesmurty.utils/java-xmlbuilder/0.4/ac5962e48cdee3a0a6e1f8e00fcb594747ac5aaf/java-xmlbuilder-0.4.jar to gs://temp-storage-for-perf-tests/loadtests/staging/java-xmlbuilder-0.4-aB5TxP_Vn6EgaIA7JZ46g9Q_B6R8ES50ihh97heesx8.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.sun.activation/jakarta.activation/1.2.2/74548703f9851017ce2f556066659438019e7eb5/jakarta.activation-1.2.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jakarta.activation-1.2.2-AhVnc-SunQSNFKVq011kS-6fEFKnkdBy3z3tPGVubho.jar
    Jan 14, 2022 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.ehcache/ehcache/3.3.1/8de32b5df06074d4d9ec48a0df9a9df290e2e5dd/ehcache-3.3.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/ehcache-3.3.1-uzutJRm0_eRWyjx_BqRCNdo0dpn89MspvJ9qDLCWBPY.jar
    Jan 14, 2022 6:45:27 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 253 files cached, 104 files newly uploaded in 3 seconds
    Jan 14, 2022 6:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 14, 2022 6:45:27 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140660 bytes, hash 58857f74122c899cc5844cf77e5266aed74f1a0e2de40b44ae7851670d7474b4> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-WIV_dBIsiZzFhEz3flJmrtdPGg4t5AtErnhRZw10dLQ.pb
    Jan 14, 2022 6:45:30 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 14, 2022 6:45:30 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 14, 2022 6:45:30 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 14, 2022 6:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 14, 2022 6:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-13_22_45_30-11912193549035344149?project=apache-beam-testing
    Jan 14, 2022 6:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-13_22_45_30-11912193549035344149
    Jan 14, 2022 6:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-13_22_45_30-11912193549035344149
    Jan 14, 2022 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-14T06:45:34.214Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 14, 2022 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T06:45:43.146Z: Worker configuration: e2-standard-2 in us-central1-b.
    Jan 14, 2022 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T06:45:43.759Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 14, 2022 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T06:45:43.793Z: Expanding GroupByKey operations into optimizable parts.
    Jan 14, 2022 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T06:45:43.826Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 14, 2022 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T06:45:43.887Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 14, 2022 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T06:45:43.913Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 14, 2022 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T06:45:43.938Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 14, 2022 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T06:45:44.386Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 14, 2022 6:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T06:45:44.471Z: Starting 5 workers in us-central1-b...
    Jan 14, 2022 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T06:45:47.469Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 14, 2022 6:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T06:46:35.466Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 14, 2022 6:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T06:47:02.642Z: Workers have started successfully.
    Jan 14, 2022 6:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T06:47:02.672Z: Workers have started successfully.
    Jan 14, 2022 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T06:47:35.954Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 14, 2022 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T06:47:36.136Z: Cleaning up.
    Jan 14, 2022 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T06:47:36.237Z: Stopping worker pool...
    Jan 14, 2022 6:50:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T06:49:58.456Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 14, 2022 6:50:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T06:49:58.520Z: Worker pool stopped.
    Jan 14, 2022 6:50:06 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-13_22_45_30-11912193549035344149 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4703acfe-2367-4a82-a2f5-1af11a0a15b0 and timestamp: 2022-01-14T06:50:06.847000000Z:
                     Metric:                    Value:
                   read_time                     8.616
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 14, 2022 6:50:06 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
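
A note on the InfluxDB warning just above: the run finished, but its metrics were dropped because the publisher had no measurement/database configured. As a hedged sketch only, Beam's perf-test jobs usually supply these through extra entries in the -DbeamTestPipelineOptions list; the option names below follow that convention and the angle-bracket values are placeholders, not values recovered from this log:

    "--influxMeasurement=<measurement>",
    "--influxDatabase=<database>",
    "--influxHost=<http://influx-host:8086>"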

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 59.431 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 45s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/734l4fggoh3na

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2910

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2910/display/redirect?page=changes>

Changes:

[noreply] [BEAM-12464] Change ProtoSchemaTranslator beam schema creation to match

[noreply] Introduce the notion of a JoinIndex for fewer shuffles. (#16101)

[noreply] Merge pull request #16467 from [BEAM-12164]: SpannerIO

[noreply] Merge pull request #16385 from [BEAM-13535] [Playground] add cancel

[noreply] Merge pull request #16485 from [BEAM-13486] [Playground] For unit tests


------------------------------------------
[...truncated 350.39 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 7482713221220e08caf9d5c03e7cb42a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 14, 2022 12:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
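
A small aside on the deprecation warning above: the test invocation's beamTestPipelineOptions passes the legacy flag, and the message itself names the replacement, so the fix would be swapping the entry in that list, e.g.:

    "--sdkContainerImage="    // in place of the deprecated "--workerHarnessContainerImage="
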
    Jan 14, 2022 12:45:20 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 14, 2022 12:45:20 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 14, 2022 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 14, 2022 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 14, 2022 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 14, 2022 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 14, 2022 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 14, 2022 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 14, 2022 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@18392277]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
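
The failure above (and the identical readUsingDefaultMethod failure below) has one root cause: the output of the RowMonitor ParDo carries Beam Rows but no schema, so no coder can be inferred. Normally the SQL rel-to-PCollection conversion in BeamSqlRelUtils attaches the rel's row type itself; as a minimal sketch of the generic fix the exception message prescribes, with illustrative names (RowMonitorFn, the field list, and the field types are assumptions, not the IT's actual code):

    // A fragment, not a full pipeline; imports shown for completeness.
    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Field names mirror the SELECT list above; types are illustrative guesses.
    Schema schema = Schema.builder()
        .addStringField("author")
        .addStringField("type")
        .addStringField("title")
        .addInt64Field("score")
        .build();

    // RowMonitorFn is a hypothetical stand-in for the IT's ParDo(RowMonitor).
    // Attaching the schema lets the runner build a RowCoder for the output.
    PCollection<Row> monitored =
        rows.apply(ParDo.of(new RowMonitorFn()))
            .setRowSchema(schema);   // equivalently: .setCoder(RowCoder.of(schema))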

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 14, 2022 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 14, 2022 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 14, 2022 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 14, 2022 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 14, 2022 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 14, 2022 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 14, 2022 12:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1046044067]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 14, 2022 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 14, 2022 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 14, 2022 12:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 14, 2022 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 14, 2022 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 14, 2022 12:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 14, 2022 12:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 14, 2022 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
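
For contrast with the two failing variants, this passing push-down run reads only usedFields=[by, type, title, score] and evaluates the filter server-side. A rough equivalent at the BigQueryIO level, as a sketch (the table spec is hypothetical and the wiring is assumed, not taken from the IT):

    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;

    BigQueryIO.readTableRows()
        .from("<project>:beam.HACKER_NEWS")  // hypothetical table spec
        .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
        // Mirrors usedFields from the BEAMPlan above.
        .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
        // Mirrors the filter logged just above.
        .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
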
    Jan 14, 2022 12:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 14, 2022 12:45:31 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 14, 2022 12:45:31 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-HoKmZanH_5O4w8vR-XjUi706ly4Ab8VQuPSZeLzefik.jar
    Jan 14, 2022 12:45:31 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8065491286770388294.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-f7jFGIeMil5Ct8_a2RRYr3JTjXsg-YKbi5Pccuo20J0.jar
    Jan 14, 2022 12:45:31 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 356 files cached, 1 files newly uploaded in 0 seconds
    Jan 14, 2022 12:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 14, 2022 12:45:31 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140661 bytes, hash a8168537401ab8f1e08d0d2ab2ecb91a55d7ac7482d4090df9f886f4428a061a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-qBaFN0AauPHgjQ0qsuy5GlXXrHSC1AkN-fiG9EKKBho.pb
    Jan 14, 2022 12:45:35 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 14, 2022 12:45:35 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 14, 2022 12:45:35 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 14, 2022 12:45:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 14, 2022 12:45:36 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-13_16_45_35-16812034470397968115?project=apache-beam-testing
    Jan 14, 2022 12:45:36 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-13_16_45_35-16812034470397968115
    Jan 14, 2022 12:45:36 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-13_16_45_35-16812034470397968115
    Jan 14, 2022 12:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-14T00:45:38.990Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 14, 2022 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T00:45:47.364Z: Worker configuration: e2-standard-2 in us-central1-a.
    Jan 14, 2022 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T00:45:47.993Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 14, 2022 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T00:45:48.031Z: Expanding GroupByKey operations into optimizable parts.
    Jan 14, 2022 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T00:45:48.070Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 14, 2022 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T00:45:48.135Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 14, 2022 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T00:45:48.161Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 14, 2022 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T00:45:48.185Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 14, 2022 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T00:45:48.711Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 14, 2022 12:45:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T00:45:48.779Z: Starting 5 workers in us-central1-a...
    Jan 14, 2022 12:46:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T00:46:04.801Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 14, 2022 12:46:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T00:46:45.835Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 14, 2022 12:47:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T00:47:11.392Z: Workers have started successfully.
    Jan 14, 2022 12:47:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T00:47:11.435Z: Workers have started successfully.
    Jan 14, 2022 12:47:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T00:47:40.169Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 14, 2022 12:47:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T00:47:40.324Z: Cleaning up.
    Jan 14, 2022 12:47:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T00:47:40.408Z: Stopping worker pool...
    Jan 14, 2022 12:50:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T00:50:07.708Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 14, 2022 12:50:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-14T00:50:07.759Z: Worker pool stopped.
    Jan 14, 2022 12:50:13 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-13_16_45_35-16812034470397968115 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 30a9e3b3-140a-4e03-85a7-cfac1b8e91de and timestamp: 2022-01-14T00:50:13.752000000Z:
                     Metric:                    Value:
                   read_time                     7.256
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 14, 2022 12:50:13 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 58.058 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 52s
165 actionable tasks: 103 executed, 60 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/s4eegmg3czvmi

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2909

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2909/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13480] Increase pipeline timeout for

[noreply] Stronger typing inference for CoGBK. (#16465)


------------------------------------------
[...truncated 356.83 KB...]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 13, 2022 6:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 13, 2022 6:44:58 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 13, 2022 6:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 13, 2022 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 13, 2022 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 13, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 13, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 13, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 13, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 13, 2022 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@18392277]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
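
The three root causes above all point at the same remedy: the Row output of ParDo(RowMonitor) has no schema attached, so no RowCoder can be inferred. A minimal, self-contained sketch of the PCollection.setRowSchema fix the message names (the class name, schema, and DoFn here are illustrative, not the IT's actual code):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // Illustrative schema covering just the four selected columns.
        Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();

        Row story =
            Row.withSchema(schema).addValues("someone", "story", "A title", 3L).build();

        PCollection<Row> monitored =
            p.apply(Create.of(story).withRowSchema(schema))
                .apply(
                    "RowMonitorLookalike",
                    ParDo.of(
                        new DoFn<Row, Row>() {
                          @ProcessElement
                          public void process(@Element Row row, OutputReceiver<Row> out) {
                            out.output(row);
                          }
                        }))
                // A ParDo's Row output carries no schema, so coder inference
                // fails exactly as in the stack trace above; re-attach one.
                .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }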

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 13, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 13, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 13, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 13, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 13, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 13, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 13, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1046044067]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 13, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 13, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 13, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 13, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 13, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 13, 2022 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 13, 2022 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 13, 2022 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 13, 2022 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 13, 2022 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 13, 2022 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-HoKmZanH_5O4w8vR-XjUi706ly4Ab8VQuPSZeLzefik.jar
    Jan 13, 2022 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test9156218825089533647.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-tg54PoiHlfFOYK5sBC4DS-6ZY89Z2XTHAmvexf0Ej4w.jar
    Jan 13, 2022 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 356 files cached, 1 files newly uploaded in 1 seconds
    Jan 13, 2022 6:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 13, 2022 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140660 bytes, hash 31166ab209fc158f59663460f6f6b4a8d7916262ada782c3f9c315ebea21ebdb> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-MRZqsgn8FY9ZZjRg9va0qNeRYmKtp4LD-cMV6-oh69s.pb
    Jan 13, 2022 6:45:14 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 13, 2022 6:45:14 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 13, 2022 6:45:14 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 13, 2022 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 13, 2022 6:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-13_10_45_14-17887481091646140131?project=apache-beam-testing
    Jan 13, 2022 6:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-13_10_45_14-17887481091646140131
    Jan 13, 2022 6:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-13_10_45_14-17887481091646140131
    Jan 13, 2022 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-13T18:45:19.437Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 13, 2022 6:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T18:45:30.180Z: Worker configuration: e2-standard-2 in us-central1-a.
    Jan 13, 2022 6:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T18:45:30.881Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 13, 2022 6:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T18:45:30.976Z: Expanding GroupByKey operations into optimizable parts.
    Jan 13, 2022 6:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T18:45:31.003Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 13, 2022 6:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T18:45:31.056Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 13, 2022 6:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T18:45:31.082Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 13, 2022 6:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T18:45:31.106Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 13, 2022 6:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T18:45:31.458Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 13, 2022 6:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T18:45:31.587Z: Starting 5 workers in us-central1-a...
    Jan 13, 2022 6:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T18:45:31.880Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
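
The warning above links to the Monitoring v3 metricDescriptors.list and metricDescriptors.delete methods. As a rough shell sketch of the suggested cleanup (the endpoint and filter syntax are the documented Monitoring API ones; the descriptor being deleted is a placeholder):

    # List the project's custom metric descriptors:
    curl -s -G \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      --data-urlencode 'filter=metric.type = starts_with("custom.googleapis.com/")' \
      https://monitoring.googleapis.com/v3/projects/apache-beam-testing/metricDescriptors

    # Delete one unused descriptor (the name below is hypothetical):
    curl -s -X DELETE \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      https://monitoring.googleapis.com/v3/projects/apache-beam-testing/metricDescriptors/custom.googleapis.com/some_unused_metric
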
    Jan 13, 2022 6:46:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T18:46:18.452Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Jan 13, 2022 6:46:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T18:46:18.475Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Jan 13, 2022 6:46:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T18:46:28.906Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Jan 13, 2022 6:46:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T18:46:28.942Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Jan 13, 2022 6:46:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T18:46:39.278Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Jan 13, 2022 6:46:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T18:46:39.319Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Jan 13, 2022 6:46:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T18:46:45.430Z: Workers have started successfully.
    Jan 13, 2022 6:46:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T18:46:45.460Z: Workers have started successfully.
    Jan 13, 2022 6:46:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T18:46:49.749Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Jan 13, 2022 6:46:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T18:46:49.784Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Jan 13, 2022 6:47:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T18:47:25.749Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 13, 2022 6:47:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T18:47:26.020Z: Cleaning up.
    Jan 13, 2022 6:47:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T18:47:26.096Z: Stopping worker pool...
    Jan 13, 2022 6:50:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T18:50:57.767Z: Autoscaling: Resized worker pool from 4 to 0.
    Jan 13, 2022 6:50:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T18:50:57.938Z: Worker pool stopped.
    Jan 13, 2022 6:51:06 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-13_10_45_14-17887481091646140131 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): ea255dfa-29e8-4d4b-8b9b-9aad3d2587d8 and timestamp: 2022-01-13T18:51:06.298000000Z:
                     Metric:                    Value:
                   read_time                    16.751
                 fields_read                 4375276.0
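
For comparison with the two failing plans above: the push-down variant succeeds because the planner emitted a BeamPushDownIOSourceRel, moving the projection and filter into the BigQuery storage read instead of applying a BeamCalcRel over a full scan. Push-down is driven by the table's method property; a sketch of the corresponding Beam SQL DDL, following the CREATE EXTERNAL TABLE syntax of the BigQuery table provider (the location is illustrative, and only the four queried columns are shown):

    CREATE EXTERNAL TABLE HACKER_NEWS (
      title VARCHAR,
      `by` VARCHAR,
      score BIGINT,
      `type` VARCHAR
    )
    TYPE bigquery
    LOCATION 'apache-beam-testing:beam.HACKER_NEWS'
    TBLPROPERTIES '{"method": "DIRECT_READ"}'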

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 13, 2022 6:51:06 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
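
Publishing would need the InfluxDB settings that are absent from this invocation. Assuming the option names Beam's test utilities generally read (an educated guess, not taken from this log), the -DbeamTestPipelineOptions array would gain entries along the lines of:

    "--influxDatabase=beam_test_metrics"
    "--influxMeasurement=sql_bqio_read_java_batch"
    "--influxHost=http://localhost:8086"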

Gradle Test Executor 62 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.001 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.002 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 6 mins 12.004 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings
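
Concretely, re-running just the reported task with the suggested flag would look like:

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --warning-mode all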

BUILD FAILED in 6m 47s
165 actionable tasks: 107 executed, 56 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/4jgqe7oeysxqq

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2908

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2908/display/redirect>

Changes:


------------------------------------------
[...truncated 347.37 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
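
The -DbeamTestPipelineOptions JSON array in that command line is how pipeline flags reach the test JVM; tests read them back through Beam's test utilities. A minimal sketch of that round trip (assuming TestPipeline.testingPipelineOptions and the GCP options extension on the classpath, as Beam integration tests use; not the IT's actual code):

    import org.apache.beam.sdk.extensions.gcp.options.GcpOptions;
    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    public class OptionsSketch {
      public static void main(String[] args) {
        // Parses the JSON array stored in the beamTestPipelineOptions
        // system property (set with -D on the command line above).
        PipelineOptions options = TestPipeline.testingPipelineOptions();
        System.out.println(options.as(GcpOptions.class).getProject());
      }
    }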

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 13, 2022 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 13, 2022 12:45:14 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 13, 2022 12:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 13, 2022 12:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 13, 2022 12:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 13, 2022 12:45:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 13, 2022 12:45:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 13, 2022 12:45:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 13, 2022 12:45:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 13, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@18392277]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 13, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 13, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 13, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 13, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 13, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 13, 2022 12:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 13, 2022 12:45:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1046044067]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 13, 2022 12:45:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 13, 2022 12:45:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 13, 2022 12:45:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 13, 2022 12:45:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 13, 2022 12:45:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 13, 2022 12:45:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 13, 2022 12:45:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 13, 2022 12:45:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 13, 2022 12:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 13, 2022 12:45:31 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 13, 2022 12:45:31 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-HoKmZanH_5O4w8vR-XjUi706ly4Ab8VQuPSZeLzefik.jar
    Jan 13, 2022 12:45:31 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test963401614125994815.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-EYZXJ0ZiPegVOV6BhyohH1BiIfA5GMIdz91iLrDqWHo.jar
    Jan 13, 2022 12:45:32 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 356 files cached, 1 files newly uploaded in 0 seconds
    Jan 13, 2022 12:45:32 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 13, 2022 12:45:32 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140660 bytes, hash d2043c5491f9d30c92d9679a11f08d1883aa99c2ca0b99c63278dfc09c3d63f1> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-0gQ8VJH50wyS2WeaEfCNGIOqmcLKC5nGMnjfwJw9Y_E.pb
    Jan 13, 2022 12:45:36 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 13, 2022 12:45:36 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 13, 2022 12:45:36 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 13, 2022 12:45:36 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 13, 2022 12:45:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-13_04_45_36-9347011079743844205?project=apache-beam-testing
    Jan 13, 2022 12:45:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-13_04_45_36-9347011079743844205
    Jan 13, 2022 12:45:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-13_04_45_36-9347011079743844205
    Jan 13, 2022 12:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-13T12:45:41.810Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 13, 2022 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T12:45:49.269Z: Worker configuration: e2-standard-2 in us-central1-a.
    Jan 13, 2022 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T12:45:50.177Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 13, 2022 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T12:45:50.226Z: Expanding GroupByKey operations into optimizable parts.
    Jan 13, 2022 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T12:45:50.260Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 13, 2022 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T12:45:50.348Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 13, 2022 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T12:45:50.394Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 13, 2022 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T12:45:50.436Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 13, 2022 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T12:45:50.875Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 13, 2022 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T12:45:50.961Z: Starting 5 workers in us-central1-a...
    Jan 13, 2022 12:46:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T12:45:59.815Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 13, 2022 12:46:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T12:46:41.237Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 13, 2022 12:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T12:47:07.365Z: Workers have started successfully.
    Jan 13, 2022 12:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T12:47:07.400Z: Workers have started successfully.
    Jan 13, 2022 12:47:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T12:47:33.663Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 13, 2022 12:47:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T12:47:33.797Z: Cleaning up.
    Jan 13, 2022 12:47:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T12:47:33.868Z: Stopping worker pool...
    Jan 13, 2022 12:50:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T12:50:00.314Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 13, 2022 12:50:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T12:50:00.360Z: Worker pool stopped.
    Jan 13, 2022 12:50:10 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-13_04_45_36-9347011079743844205 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 17ad0790-1cd8-4f9a-82cf-61f528b1b480 and timestamp: 2022-01-13T12:50:10.184000000Z:
                     Metric:                    Value:
                   read_time                     5.521
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 13, 2022 12:50:10 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.044 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 5 mins 4.731 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 47s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/fqnb6vl2gbits

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2907

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2907/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13399] Move service liveness polling to Runner type (#16487)


------------------------------------------
[...truncated 348.52 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.internal.worker.tmpdir=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/tmp/integrationTest/work> -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/7.3.2/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Jan 13, 2022 6:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Jan 13, 2022 6:44:54 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 13, 2022 6:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 356 files. Enable logging at DEBUG level to see which files will be staged.
    Jan 13, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 13, 2022 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 13, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 13, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 13, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 13, 2022 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 13, 2022 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@18392277]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Jan 13, 2022 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 13, 2022 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 13, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 13, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 13, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 13, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 13, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1046044067]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 13, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 13, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 13, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Jan 13, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 13, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 13, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 13, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Jan 13, 2022 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Jan 13, 2022 6:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Jan 13, 2022 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 357 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Jan 13, 2022 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.37.0-SNAPSHOT-HoKmZanH_5O4w8vR-XjUi706ly4Ab8VQuPSZeLzefik.jar
    Jan 13, 2022 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4239810426241241148.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-DGpG51JIHLv9NXphohFDFJuAVA4tV5X_rBQRJy-1xBE.jar
    Jan 13, 2022 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 356 files cached, 1 file newly uploaded in 0 seconds
    Jan 13, 2022 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 13, 2022 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <140660 bytes, hash 7a8da4c853774a1ffe55e6ecb522dd135074cb42bf4bb29af1d436bb10bb70e7> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-eo2kyFN3Sh_-VebstSLdE1B0y0K_S7Ka8dQ2uxC7cOc.pb
    Jan 13, 2022 6:45:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 13, 2022 6:45:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Jan 13, 2022 6:45:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Jan 13, 2022 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.37.0-SNAPSHOT
    Jan 13, 2022 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2022-01-12_22_45_12-13954362948564424449?project=apache-beam-testing
    Jan 13, 2022 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2022-01-12_22_45_12-13954362948564424449
    Jan 13, 2022 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2022-01-12_22_45_12-13954362948564424449
    Jan 13, 2022 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2022-01-13T06:45:15.409Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 13, 2022 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T06:45:24.120Z: Worker configuration: e2-standard-2 in us-central1-f.
    Jan 13, 2022 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T06:45:25.036Z: Expanding CoGroupByKey operations into optimizable parts.
    Jan 13, 2022 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T06:45:25.068Z: Expanding GroupByKey operations into optimizable parts.
    Jan 13, 2022 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T06:45:25.096Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Jan 13, 2022 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T06:45:25.157Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Jan 13, 2022 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T06:45:25.183Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Jan 13, 2022 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T06:45:25.216Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Jan 13, 2022 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T06:45:25.592Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 13, 2022 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T06:45:25.710Z: Starting 5 workers in us-central1-f...
    Jan 13, 2022 6:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T06:45:47.918Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 13, 2022 6:46:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T06:46:10.690Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Jan 13, 2022 6:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T06:46:37.456Z: Workers have started successfully.
    Jan 13, 2022 6:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T06:46:37.482Z: Workers have started successfully.
    Jan 13, 2022 6:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T06:47:07.416Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Jan 13, 2022 6:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T06:47:07.562Z: Cleaning up.
    Jan 13, 2022 6:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T06:47:07.642Z: Stopping worker pool...
    Jan 13, 2022 6:49:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T06:49:39.546Z: Autoscaling: Resized worker pool from 5 to 0.
    Jan 13, 2022 6:49:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2022-01-13T06:49:39.623Z: Worker pool stopped.
    Jan 13, 2022 6:49:46 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2022-01-12_22_45_12-13954362948564424449 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): bafa37c5-db63-4562-91c6-8624099b345a and timestamp: 2022-01-13T06:49:47.031000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                      6.73
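
    The fields_read and read_time values above are custom Beam metrics collected
    from the finished job. A minimal sketch of querying such a counter from the
    PipelineResult (the "beam" namespace is an assumption, and `result` is the
    PipelineResult of the finished run):

        import org.apache.beam.sdk.PipelineResult;
        import org.apache.beam.sdk.metrics.MetricNameFilter;
        import org.apache.beam.sdk.metrics.MetricQueryResults;
        import org.apache.beam.sdk.metrics.MetricResult;
        import org.apache.beam.sdk.metrics.MetricsFilter;

        // Query a named counter once the job has reached a terminal state.
        MetricQueryResults metrics =
            result.metrics().queryMetrics(
                MetricsFilter.builder()
                    .addNameFilter(MetricNameFilter.named("beam", "fields_read"))
                    .build());
        for (MetricResult<Long> counter : metrics.getCounters()) {
          System.out.println(counter.getName() + ": " + counter.getAttempted());
        }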

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 13, 2022 6:49:47 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
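
    This warning means the run was started without the InfluxDB
    measurement/database settings, so the metrics above were printed but not
    persisted. A hedged sketch of supplying them via the settings builder in
    org.apache.beam.sdk.testutils.publishing (class and method names, host, and
    the database/measurement values are assumptions and may differ by version):

        import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

        // Both the database and the measurement must be set, otherwise
        // InfluxDBPublisher.publishWithCheck skips publishing (as seen above).
        InfluxDBSettings settings =
            InfluxDBSettings.builder()
                .withHost("http://localhost:8086")            // assumed host
                .withDatabase("beam_performance")             // the missing "database"
                .withMeasurement("sql_bqio_read_java_batch")  // the missing "measurement"
                .get();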

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 58.624 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.
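
For example, these suggestions combine into a rerun of the failing task (task path taken from the failure above):
> gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --stacktrace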

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 28s
165 actionable tasks: 101 executed, 62 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/p2twvxt3otjd2

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org